public inbox for gcc-patches@gcc.gnu.org
* [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target.
@ 2023-07-18 11:06 Chenghui Pan
  2023-07-18 11:06 ` [PATCH v2 1/8] LoongArch: Added Loongson SX vector directive compilation framework Chenghui Pan
                   ` (8 more replies)
  0 siblings, 9 replies; 11+ messages in thread
From: Chenghui Pan @ 2023-07-18 11:06 UTC (permalink / raw)
  To: gcc-patches; +Cc: xry111, i, chenglulu, xuchenghua, Chenghui Pan

This is an update of the v1 series posted at
https://gcc.gnu.org/pipermail/gcc-patches/2023-June/623262.html
In addition, LSX/LASX instruction support has been merged into the master branch
of binutils-gdb, so these GCC patches can be used with future binutils-gdb releases.

Changes since v1:

- Some uses of "unspec" in lsx.md and lasx.md are replaced with generic
  arithmetic RTL expressions (an illustrative sketch follows this list).
- ADDR_REG_REG addressing is now supported in LSX and LASX.
- Constraint documentation is added to gcc/doc/md.texi and to the head comment
  block of gcc/config/loongarch/constraints.md.
- Code related to vecarg in loongarch.cc and loongarch.opt.in is removed.
- Testsuites for LSX and LASX are added; they can be run independently via
  loongarch-vector.exp.
- loongarch_expand_vector_init () is adjusted to reduce the instruction count
  when initializing a vector element by element.
- Minor implementation changes to the RTL templates in lsx.md and lasx.md.
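To illustrate the first point, here is a minimal sketch of the unspec-to-arithmetic
change for a 128-bit integer add.  It is not copied from lsx.md; the pattern names,
mode and unspec identifier are placeholders chosen for the example only:

  ;; v1 style: the operation is hidden behind an unspec, so the middle end
  ;; cannot reason about it.
  (define_insn "lsx_vadd_w"
    [(set (match_operand:V4SI 0 "register_operand" "=f")
          (unspec:V4SI [(match_operand:V4SI 1 "register_operand" "f")
                        (match_operand:V4SI 2 "register_operand" "f")]
                       UNSPEC_LSX_VADD_W))]
    "ISA_HAS_LSX"
    "vadd.w\t%w0,%w1,%w2")

  ;; v2 style: the same instruction expressed with the generic "plus" RTL code,
  ;; which also provides the standard addv4si3 pattern name to the vectorizer.
  (define_insn "addv4si3"
    [(set (match_operand:V4SI 0 "register_operand" "=f")
          (plus:V4SI (match_operand:V4SI 1 "register_operand" "f")
                     (match_operand:V4SI 2 "register_operand" "f")))]
    "ISA_HAS_LSX"
    "vadd.w\t%w0,%w1,%w2")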

Lulu Cheng (8):
  LoongArch: Added Loongson SX vector directive compilation framework.
  LoongArch: Added Loongson SX base instruction support.
  LoongArch: Added Loongson SX directive builtin function support.
  LoongArch: Added Loongson ASX vector directive compilation framework.
  LoongArch: Added Loongson ASX base instruction support.
  LoongArch: Added Loongson ASX directive builtin function support.
  LoongArch: Add Loongson SX directive test cases.
  LoongArch: Add Loongson ASX directive test cases.

 gcc/config.gcc                                |     2 +-
 gcc/config/loongarch/constraints.md           |   131 +-
 .../loongarch/genopts/loongarch-strings       |     4 +
 gcc/config/loongarch/genopts/loongarch.opt.in |    12 +-
 gcc/config/loongarch/lasx.md                  |  5120 +++
 gcc/config/loongarch/lasxintrin.h             |  5342 +++
 gcc/config/loongarch/loongarch-builtins.cc    |  2686 +-
 gcc/config/loongarch/loongarch-c.cc           |    18 +
 gcc/config/loongarch/loongarch-def.c          |     6 +
 gcc/config/loongarch/loongarch-def.h          |     9 +-
 gcc/config/loongarch/loongarch-driver.cc      |    10 +
 gcc/config/loongarch/loongarch-driver.h       |     2 +
 gcc/config/loongarch/loongarch-ftypes.def     |   666 +-
 gcc/config/loongarch/loongarch-modes.def      |    39 +
 gcc/config/loongarch/loongarch-opts.cc        |    89 +-
 gcc/config/loongarch/loongarch-opts.h         |     3 +
 gcc/config/loongarch/loongarch-protos.h       |    35 +
 gcc/config/loongarch/loongarch-str.h          |     3 +
 gcc/config/loongarch/loongarch.cc             |  4669 +-
 gcc/config/loongarch/loongarch.h              |   117 +-
 gcc/config/loongarch/loongarch.md             |    56 +-
 gcc/config/loongarch/loongarch.opt            |    12 +-
 gcc/config/loongarch/lsx.md                   |  4479 ++
 gcc/config/loongarch/lsxintrin.h              |  5181 +++
 gcc/config/loongarch/predicates.md            |   333 +-
 gcc/doc/md.texi                               |    11 +
 .../gcc.target/loongarch/strict-align.c       |    13 +
 .../vector/lasx/lasx-bit-manipulate.c         | 27813 +++++++++++
 .../loongarch/vector/lasx/lasx-builtin.c      |  1509 +
 .../loongarch/vector/lasx/lasx-cmp.c          |  5361 +++
 .../loongarch/vector/lasx/lasx-fp-arith.c     |  6259 +++
 .../loongarch/vector/lasx/lasx-fp-cvt.c       |  7315 +++
 .../loongarch/vector/lasx/lasx-int-arith.c    | 38361 ++++++++++++++++
 .../loongarch/vector/lasx/lasx-mem.c          |   147 +
 .../loongarch/vector/lasx/lasx-perm.c         |  7730 ++++
 .../vector/lasx/lasx-str-manipulate.c         |   712 +
 .../loongarch/vector/lasx/lasx-xvldrepl.c     |    13 +
 .../loongarch/vector/lasx/lasx-xvstelm.c      |    12 +
 .../loongarch/vector/loongarch-vector.exp     |    42 +
 .../loongarch/vector/lsx/lsx-bit-manipulate.c | 15586 +++++++
 .../loongarch/vector/lsx/lsx-builtin.c        |  1461 +
 .../gcc.target/loongarch/vector/lsx/lsx-cmp.c |  3354 ++
 .../loongarch/vector/lsx/lsx-fp-arith.c       |  3713 ++
 .../loongarch/vector/lsx/lsx-fp-cvt.c         |  4114 ++
 .../loongarch/vector/lsx/lsx-int-arith.c      | 22424 +++++++++
 .../gcc.target/loongarch/vector/lsx/lsx-mem.c |   537 +
 .../loongarch/vector/lsx/lsx-perm.c           |  5555 +++
 .../loongarch/vector/lsx/lsx-str-manipulate.c |   408 +
 .../loongarch/vector/simd_correctness_check.h |    39 +
 49 files changed, 181229 insertions(+), 284 deletions(-)
 create mode 100644 gcc/config/loongarch/lasx.md
 create mode 100644 gcc/config/loongarch/lasxintrin.h
 create mode 100644 gcc/config/loongarch/lsx.md
 create mode 100644 gcc/config/loongarch/lsxintrin.h
 create mode 100644 gcc/testsuite/gcc.target/loongarch/strict-align.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-bit-manipulate.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-builtin.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-cmp.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-fp-arith.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-fp-cvt.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-int-arith.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-mem.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-perm.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-str-manipulate.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-xvldrepl.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-xvstelm.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/loongarch-vector.exp
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-bit-manipulate.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-builtin.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-cmp.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-fp-arith.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-fp-cvt.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-int-arith.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-mem.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-perm.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-str-manipulate.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/simd_correctness_check.h

-- 
2.36.0



* [PATCH v2 1/8] LoongArch: Added Loongson SX vector directive compilation framework.
  2023-07-18 11:06 [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target Chenghui Pan
@ 2023-07-18 11:06 ` Chenghui Pan
  2023-07-18 11:06 ` [PATCH v2 2/8] LoongArch: Added Loongson SX base instruction support Chenghui Pan
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Chenghui Pan @ 2023-07-18 11:06 UTC (permalink / raw)
  To: gcc-patches; +Cc: xry111, i, chenglulu, xuchenghua

From: Lulu Cheng <chenglulu@loongson.cn>

gcc/ChangeLog:

	* config/loongarch/genopts/loongarch-strings: Added compilation framework.
	* config/loongarch/genopts/loongarch.opt.in: Ditto.
	* config/loongarch/loongarch-c.cc (loongarch_cpu_cpp_builtins): Ditto.
	* config/loongarch/loongarch-def.c: Ditto.
	* config/loongarch/loongarch-def.h (N_ISA_EXT_TYPES): Ditto.
	(ISA_EXT_SIMD_LSX): Ditto.
	(N_SWITCH_TYPES): Ditto.
	(SW_LSX): Ditto.
	(struct loongarch_isa): Ditto.
	* config/loongarch/loongarch-driver.cc (APPEND_SWITCH): Ditto.
	(driver_get_normalized_m_opts): Ditto.
	* config/loongarch/loongarch-driver.h (driver_get_normalized_m_opts): Ditto.
	* config/loongarch/loongarch-opts.cc (loongarch_config_target): Ditto.
	(isa_str): Ditto.
	* config/loongarch/loongarch-opts.h (ISA_HAS_LSX): Ditto.
	* config/loongarch/loongarch-str.h (OPTSTR_LSX): Ditto.
	* config/loongarch/loongarch.opt: Ditto.
---
 .../loongarch/genopts/loongarch-strings       |  3 +
 gcc/config/loongarch/genopts/loongarch.opt.in |  8 +-
 gcc/config/loongarch/loongarch-c.cc           |  7 ++
 gcc/config/loongarch/loongarch-def.c          |  4 +
 gcc/config/loongarch/loongarch-def.h          |  7 +-
 gcc/config/loongarch/loongarch-driver.cc      | 10 +++
 gcc/config/loongarch/loongarch-driver.h       |  1 +
 gcc/config/loongarch/loongarch-opts.cc        | 82 ++++++++++++++++++-
 gcc/config/loongarch/loongarch-opts.h         |  1 +
 gcc/config/loongarch/loongarch-str.h          |  2 +
 gcc/config/loongarch/loongarch.opt            |  8 +-
 11 files changed, 128 insertions(+), 5 deletions(-)

diff --git a/gcc/config/loongarch/genopts/loongarch-strings b/gcc/config/loongarch/genopts/loongarch-strings
index a40998ead97..24a5025061f 100644
--- a/gcc/config/loongarch/genopts/loongarch-strings
+++ b/gcc/config/loongarch/genopts/loongarch-strings
@@ -40,6 +40,9 @@ OPTSTR_SOFT_FLOAT     soft-float
 OPTSTR_SINGLE_FLOAT   single-float
 OPTSTR_DOUBLE_FLOAT   double-float
 
+# SIMD extensions
+OPTSTR_LSX	lsx
+
 # -mabi=
 OPTSTR_ABI_BASE	      abi
 STR_ABI_BASE_LP64D    lp64d
diff --git a/gcc/config/loongarch/genopts/loongarch.opt.in b/gcc/config/loongarch/genopts/loongarch.opt.in
index 4b9b4ac273e..338d77a7e40 100644
--- a/gcc/config/loongarch/genopts/loongarch.opt.in
+++ b/gcc/config/loongarch/genopts/loongarch.opt.in
@@ -76,6 +76,9 @@ m@@OPTSTR_DOUBLE_FLOAT@@
 Target Driver RejectNegative Var(la_opt_switches) Mask(FORCE_F64) Negative(m@@OPTSTR_SOFT_FLOAT@@)
 Allow hardware floating-point instructions to cover both 32-bit and 64-bit operations.
 
+m@@OPTSTR_LSX@@
+Target RejectNegative Var(la_opt_switches) Mask(LSX) Negative(m@@OPTSTR_LSX@@)
+Enable LoongArch SIMD Extension (LSX).
 
 ;; Base target models (implies ISA & tune parameters)
 Enum
@@ -125,11 +128,14 @@ Target RejectNegative Joined ToLower Enum(abi_base) Var(la_opt_abi_base) Init(M_
 Variable
 int la_opt_abi_ext = M_OPTION_NOT_SEEN
 
-
 mbranch-cost=
 Target RejectNegative Joined UInteger Var(loongarch_branch_cost)
 -mbranch-cost=COST	Set the cost of branches to roughly COST instructions.
 
+mmemvec-cost=
+Target RejectNegative Joined UInteger Var(loongarch_vector_access_cost) IntegerRange(1, 5)
+mmemvec-cost=COST      Set the cost of vector memory access instructions.
+
 mcheck-zero-division
 Target Mask(CHECK_ZERO_DIV)
 Trap on integer divide by zero.
diff --git a/gcc/config/loongarch/loongarch-c.cc b/gcc/config/loongarch/loongarch-c.cc
index 67911b78f28..b065921adc3 100644
--- a/gcc/config/loongarch/loongarch-c.cc
+++ b/gcc/config/loongarch/loongarch-c.cc
@@ -99,6 +99,13 @@ loongarch_cpu_cpp_builtins (cpp_reader *pfile)
   else
     builtin_define ("__loongarch_frlen=0");
 
+  if (ISA_HAS_LSX)
+    {
+      builtin_define ("__loongarch_simd");
+      builtin_define ("__loongarch_sx");
+      builtin_define ("__loongarch_sx_width=128");
+    }
+
   /* Native Data Sizes.  */
   builtin_define_with_int_value ("_LOONGARCH_SZINT", INT_TYPE_SIZE);
   builtin_define_with_int_value ("_LOONGARCH_SZLONG", LONG_TYPE_SIZE);
diff --git a/gcc/config/loongarch/loongarch-def.c b/gcc/config/loongarch/loongarch-def.c
index 6729c857f7c..28e24c62249 100644
--- a/gcc/config/loongarch/loongarch-def.c
+++ b/gcc/config/loongarch/loongarch-def.c
@@ -49,10 +49,12 @@ loongarch_cpu_default_isa[N_ARCH_TYPES] = {
   [CPU_LOONGARCH64] = {
       .base = ISA_BASE_LA64V100,
       .fpu = ISA_EXT_FPU64,
+      .simd = 0,
   },
   [CPU_LA464] = {
       .base = ISA_BASE_LA64V100,
       .fpu = ISA_EXT_FPU64,
+      .simd = ISA_EXT_SIMD_LSX,
   },
 };
 
@@ -147,6 +149,7 @@ loongarch_isa_ext_strings[N_ISA_EXT_TYPES] = {
   [ISA_EXT_FPU64] = STR_ISA_EXT_FPU64,
   [ISA_EXT_FPU32] = STR_ISA_EXT_FPU32,
   [ISA_EXT_NOFPU] = STR_ISA_EXT_NOFPU,
+  [ISA_EXT_SIMD_LSX] = OPTSTR_LSX,
 };
 
 const char*
@@ -176,6 +179,7 @@ loongarch_switch_strings[] = {
   [SW_SOFT_FLOAT]	  = OPTSTR_SOFT_FLOAT,
   [SW_SINGLE_FLOAT]	  = OPTSTR_SINGLE_FLOAT,
   [SW_DOUBLE_FLOAT]	  = OPTSTR_DOUBLE_FLOAT,
+  [SW_LSX]		  = OPTSTR_LSX,
 };
 
 
diff --git a/gcc/config/loongarch/loongarch-def.h b/gcc/config/loongarch/loongarch-def.h
index fb8bb88eb52..f34cffcfb9b 100644
--- a/gcc/config/loongarch/loongarch-def.h
+++ b/gcc/config/loongarch/loongarch-def.h
@@ -63,7 +63,8 @@ extern const char* loongarch_isa_ext_strings[];
 #define ISA_EXT_FPU32	      1
 #define ISA_EXT_FPU64	      2
 #define N_ISA_EXT_FPU_TYPES   3
-#define N_ISA_EXT_TYPES	      3
+#define ISA_EXT_SIMD_LSX      3
+#define N_ISA_EXT_TYPES	      4
 
 /* enum abi_base */
 extern const char* loongarch_abi_base_strings[];
@@ -97,7 +98,8 @@ extern const char* loongarch_switch_strings[];
 #define SW_SOFT_FLOAT	      0
 #define SW_SINGLE_FLOAT	      1
 #define SW_DOUBLE_FLOAT	      2
-#define N_SWITCH_TYPES	      3
+#define SW_LSX		      3
+#define N_SWITCH_TYPES	      4
 
 /* The common default value for variables whose assignments
    are triggered by command-line options.  */
@@ -111,6 +113,7 @@ struct loongarch_isa
 {
   unsigned char base;	    /* ISA_BASE_ */
   unsigned char fpu;	    /* ISA_EXT_FPU_ */
+  unsigned char simd;	    /* ISA_EXT_SIMD_ */
 };
 
 struct loongarch_abi
diff --git a/gcc/config/loongarch/loongarch-driver.cc b/gcc/config/loongarch/loongarch-driver.cc
index 11ce082417f..aa5011bd86a 100644
--- a/gcc/config/loongarch/loongarch-driver.cc
+++ b/gcc/config/loongarch/loongarch-driver.cc
@@ -160,6 +160,10 @@ driver_get_normalized_m_opts (int argc, const char **argv)
    APPEND_LTR (" %<m" OPTSTR_##NAME "=* " \
 	       " -m" OPTSTR_##NAME "=")
 
+#undef APPEND_SWITCH
+#define APPEND_SWITCH(S) \
+   APPEND_LTR (" %<m" S " -m" S)
+
   for (int i = 0; i < N_SWITCH_TYPES; i++)
     {
       APPEND_LTR (" %<m");
@@ -175,6 +179,12 @@ driver_get_normalized_m_opts (int argc, const char **argv)
   APPEND_OPT (ISA_EXT_FPU);
   APPEND_VAL (loongarch_isa_ext_strings[la_target.isa.fpu]);
 
+  if (la_target.isa.simd)
+    {
+      APPEND_LTR (" %<m" OPTSTR_LSX " -m");
+      APPEND_VAL (loongarch_isa_ext_strings[la_target.isa.simd]);
+    }
+
   APPEND_OPT (CMODEL);
   APPEND_VAL (loongarch_cmodel_strings[la_target.cmodel]);
 
diff --git a/gcc/config/loongarch/loongarch-driver.h b/gcc/config/loongarch/loongarch-driver.h
index ba8817a4621..db663818b7c 100644
--- a/gcc/config/loongarch/loongarch-driver.h
+++ b/gcc/config/loongarch/loongarch-driver.h
@@ -51,6 +51,7 @@ driver_get_normalized_m_opts (int argc, const char **argv);
   LA_SET_FLAG_SPEC (SOFT_FLOAT)				      \
   LA_SET_FLAG_SPEC (SINGLE_FLOAT)			      \
   LA_SET_FLAG_SPEC (DOUBLE_FLOAT)			      \
+  LA_SET_FLAG_SPEC (LSX)				      \
   " %:get_normalized_m_opts()"
 
 #define DRIVER_SELF_SPECS \
diff --git a/gcc/config/loongarch/loongarch-opts.cc b/gcc/config/loongarch/loongarch-opts.cc
index a52e25236ea..9753cf1290b 100644
--- a/gcc/config/loongarch/loongarch-opts.cc
+++ b/gcc/config/loongarch/loongarch-opts.cc
@@ -83,6 +83,7 @@ const int loongarch_switch_mask[N_SWITCH_TYPES] = {
   /* SW_SOFT_FLOAT */    M(FORCE_SOFTF),
   /* SW_SINGLE_FLOAT */  M(FORCE_F32),
   /* SW_DOUBLE_FLOAT */  M(FORCE_F64),
+  /* SW_LSX */		 M(LSX),
 };
 #undef M
 
@@ -142,7 +143,7 @@ loongarch_config_target (struct loongarch_target *target,
   obstack_init (&msg_obstack);
 
   struct {
-    int arch, tune, fpu, abi_base, abi_ext, cmodel;
+    int arch, tune, fpu, abi_base, abi_ext, cmodel, simd;
   } constrained = {
       M_OPT_ABSENT(opt_arch)     ? 0 : 1,
       M_OPT_ABSENT(opt_tune)     ? 0 : 1,
@@ -150,6 +151,7 @@ loongarch_config_target (struct loongarch_target *target,
       M_OPT_ABSENT(opt_abi_base) ? 0 : 1,
       M_OPT_ABSENT(opt_abi_ext)  ? 0 : 1,
       M_OPT_ABSENT(opt_cmodel)   ? 0 : 1,
+      0
   };
 
 #define on(NAME) ((loongarch_switch_mask[(SW_##NAME)] & opt_switches) \
@@ -251,6 +253,73 @@ config_target_isa:
     ((t.cpu_arch == CPU_NATIVE && constrained.arch) ?
      t.isa.fpu : DEFAULT_ISA_EXT_FPU);
 
+  /* LoongArch SIMD extensions.  */
+  int simd_switch;
+  if (on (LSX))
+    {
+      constrained.simd = 1;
+      switch (on_switch)
+	{
+	  case SW_LSX:
+	    t.isa.simd = ISA_EXT_SIMD_LSX;
+	    break;
+
+	  default:
+	    gcc_unreachable ();
+	}
+    }
+  simd_switch = on_switch;
+
+  /* All SIMD extensions imply a 64-bit FPU:
+     - silently adjust t.isa.fpu to "fpu64" if it is unconstrained.
+     - warn if -msingle-float / -msoft-float is on, then disable SIMD extensions
+     - abort if -mfpu=0 / -mfpu=32 is forced.  */
+
+  if (t.isa.simd != 0 && t.isa.fpu != ISA_EXT_FPU64)
+    {
+    if (!constrained.fpu)
+      {
+	/* As long as the arch-default "t.isa.simd" is set to non-zero
+	   for an element "t" in loongarch_cpu_default_isa, "t.isa.fpu"
+	   should be set to "ISA_EXT_FPU64" accordingly.  Thus reaching
+	   here must be the result of forcing -mlsx/-mlasx explicitly.  */
+	gcc_assert (constrained.simd);
+
+	inform (UNKNOWN_LOCATION,
+		"%<-m%s%> promotes %<%s%> to %<%s%s%>",
+		OPTSTR_ISA_EXT_FPU, loongarch_isa_ext_strings[t.isa.fpu],
+		OPTSTR_ISA_EXT_FPU, loongarch_isa_ext_strings[ISA_EXT_FPU64]);
+
+	t.isa.fpu = ISA_EXT_FPU64;
+      }
+    else if (on (SOFT_FLOAT) || on (SINGLE_FLOAT))
+      {
+	if (constrained.simd)
+	  inform (UNKNOWN_LOCATION,
+		  "%<-m%s%> is disabled by %<-m%s%>, because it requires %<%s%s%>",
+		  loongarch_switch_strings[simd_switch],
+		  loongarch_switch_strings[on_switch],
+		  OPTSTR_ISA_EXT_FPU, loongarch_isa_ext_strings[ISA_EXT_FPU64]);
+
+	/* SIMD that comes from arch default.  */
+	t.isa.simd = 0;
+      }
+    else
+      {
+	/* -mfpu=0 / -mfpu=32 is set.  */
+	if (constrained.simd)
+	  fatal_error (UNKNOWN_LOCATION,
+		       "%<-m%s=%s%> conflicts with %<-m%s%>,"
+		       "which requires %<%s%s%>",
+		       OPTSTR_ISA_EXT_FPU, loongarch_isa_ext_strings[t.isa.fpu],
+		       loongarch_switch_strings[simd_switch],
+		       OPTSTR_ISA_EXT_FPU,
+		       loongarch_isa_ext_strings[ISA_EXT_FPU64]);
+
+	/* Same as above.  */
+	t.isa.simd = 0;
+      }
+    }
 
   /* 4.  ABI-ISA compatibility */
   /* Note:
@@ -530,6 +599,17 @@ isa_str (const struct loongarch_isa *isa, char separator)
       APPEND_STRING (OPTSTR_ISA_EXT_FPU)
       APPEND_STRING (loongarch_isa_ext_strings[isa->fpu])
     }
+
+  switch (isa->simd)
+    {
+      case ISA_EXT_SIMD_LSX:
+	APPEND1 (separator);
+	APPEND_STRING (loongarch_isa_ext_strings[isa->simd]);
+	break;
+
+      default:
+	gcc_assert (isa->simd == 0);
+    }
   APPEND1 ('\0')
 
   /* Add more here.  */
diff --git a/gcc/config/loongarch/loongarch-opts.h b/gcc/config/loongarch/loongarch-opts.h
index b1ff54426e4..d067c05dfc9 100644
--- a/gcc/config/loongarch/loongarch-opts.h
+++ b/gcc/config/loongarch/loongarch-opts.h
@@ -66,6 +66,7 @@ loongarch_config_target (struct loongarch_target *target,
 				   || la_target.abi.base == ABI_BASE_LP64F \
 				   || la_target.abi.base == ABI_BASE_LP64S)
 
+#define ISA_HAS_LSX		  (la_target.isa.simd == ISA_EXT_SIMD_LSX)
 #define TARGET_ARCH_NATIVE	  (la_target.cpu_arch == CPU_NATIVE)
 #define LARCH_ACTUAL_ARCH	  (TARGET_ARCH_NATIVE \
 				   ? (la_target.cpu_native < N_ARCH_TYPES \
diff --git a/gcc/config/loongarch/loongarch-str.h b/gcc/config/loongarch/loongarch-str.h
index af2e82a321f..6fa1b1571c5 100644
--- a/gcc/config/loongarch/loongarch-str.h
+++ b/gcc/config/loongarch/loongarch-str.h
@@ -42,6 +42,8 @@ along with GCC; see the file COPYING3.  If not see
 #define OPTSTR_SINGLE_FLOAT "single-float"
 #define OPTSTR_DOUBLE_FLOAT "double-float"
 
+#define OPTSTR_LSX "lsx"
+
 #define OPTSTR_ABI_BASE "abi"
 #define STR_ABI_BASE_LP64D "lp64d"
 #define STR_ABI_BASE_LP64F "lp64f"
diff --git a/gcc/config/loongarch/loongarch.opt b/gcc/config/loongarch/loongarch.opt
index 68018ade73f..5c7e6d37220 100644
--- a/gcc/config/loongarch/loongarch.opt
+++ b/gcc/config/loongarch/loongarch.opt
@@ -83,6 +83,9 @@ mdouble-float
 Target Driver RejectNegative Var(la_opt_switches) Mask(FORCE_F64) Negative(msoft-float)
 Allow hardware floating-point instructions to cover both 32-bit and 64-bit operations.
 
+mlsx
+Target RejectNegative Var(la_opt_switches) Mask(LSX) Negative(mlsx)
+Enable LoongArch SIMD Extension (LSX).
 
 ;; Base target models (implies ISA & tune parameters)
 Enum
@@ -132,11 +135,14 @@ Target RejectNegative Joined ToLower Enum(abi_base) Var(la_opt_abi_base) Init(M_
 Variable
 int la_opt_abi_ext = M_OPTION_NOT_SEEN
 
-
 mbranch-cost=
 Target RejectNegative Joined UInteger Var(loongarch_branch_cost)
 -mbranch-cost=COST	Set the cost of branches to roughly COST instructions.
 
+mmemvec-cost=
+Target RejectNegative Joined UInteger Var(loongarch_vector_access_cost) IntegerRange(1, 5)
+mmemvec-cost=COST      Set the cost of vector memory access instructions.
+
 mcheck-zero-division
 Target Mask(CHECK_ZERO_DIV)
 Trap on integer divide by zero.
-- 
2.36.0



* [PATCH v2 2/8] LoongArch: Added Loongson SX base instruction support.
  2023-07-18 11:06 [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target Chenghui Pan
  2023-07-18 11:06 ` [PATCH v2 1/8] LoongArch: Added Loongson SX vector directive compilation framework Chenghui Pan
@ 2023-07-18 11:06 ` Chenghui Pan
  2023-07-18 11:06 ` [PATCH v2 3/8] LoongArch: Added Loongson SX directive builtin function support Chenghui Pan
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Chenghui Pan @ 2023-07-18 11:06 UTC (permalink / raw)
  To: gcc-patches; +Cc: xry111, i, chenglulu, xuchenghua

From: Lulu Cheng <chenglulu@loongson.cn>

gcc/ChangeLog:

	* config/loongarch/constraints.md (M): Added Loongson LSX base instruction support.
	(N): Ditto.
	(O): Ditto.
	(P): Ditto.
	(R): Ditto.
	(S): Ditto.
	(YG): Ditto.
	(YA): Ditto.
	(YB): Ditto.
	(Yb): Ditto.
	(Yh): Ditto.
	(Yw): Ditto.
	(YI): Ditto.
	(YC): Ditto.
	(YZ): Ditto.
	(Unv5): Ditto.
	(Uuv5): Ditto.
	(Usv5): Ditto.
	(Uuv6): Ditto.
	(Urv8): Ditto.
	* config/loongarch/loongarch-builtins.cc (loongarch_gen_const_int_vector): Ditto.
	* config/loongarch/loongarch-modes.def (VECTOR_MODES): Ditto.
	(VECTOR_MODE): Ditto.
	(INT_MODE): Ditto.
	* config/loongarch/loongarch-protos.h (loongarch_split_move_insn_p): Ditto.
	(loongarch_split_move_insn): Ditto.
	(loongarch_split_128bit_move): Ditto.
	(loongarch_split_128bit_move_p): Ditto.
	(loongarch_split_lsx_copy_d): Ditto.
	(loongarch_split_lsx_insert_d): Ditto.
	(loongarch_split_lsx_fill_d): Ditto.
	(loongarch_expand_vec_cmp): Ditto.
	(loongarch_const_vector_same_val_p): Ditto.
	(loongarch_const_vector_same_bytes_p): Ditto.
	(loongarch_const_vector_same_int_p): Ditto.
	(loongarch_const_vector_shuffle_set_p): Ditto.
	(loongarch_const_vector_bitimm_set_p): Ditto.
	(loongarch_const_vector_bitimm_clr_p): Ditto.
	(loongarch_lsx_vec_parallel_const_half): Ditto.
	(loongarch_gen_const_int_vector): Ditto.
	(loongarch_lsx_output_division): Ditto.
	(loongarch_expand_vector_init): Ditto.
	(loongarch_expand_vec_unpack): Ditto.
	(loongarch_expand_vec_perm): Ditto.
	(loongarch_expand_vector_extract): Ditto.
	(loongarch_expand_vector_reduc): Ditto.
	(loongarch_ldst_scaled_shift): Ditto.
	(loongarch_expand_vec_cond_expr): Ditto.
	(loongarch_expand_vec_cond_mask_expr): Ditto.
	(loongarch_builtin_vectorized_function): Ditto.
	(loongarch_gen_const_int_vector_shuffle): Ditto.
	(loongarch_build_signbit_mask): Ditto.
	* config/loongarch/loongarch.cc (loongarch_pass_aggregate_num_fpr): Ditto.
	(loongarch_setup_incoming_varargs): Ditto.
	(loongarch_emit_move): Ditto.
	(loongarch_const_vector_bitimm_set_p): Ditto.
	(loongarch_const_vector_bitimm_clr_p): Ditto.
	(loongarch_const_vector_same_val_p): Ditto.
	(loongarch_const_vector_same_bytes_p): Ditto.
	(loongarch_const_vector_same_int_p): Ditto.
	(loongarch_const_vector_shuffle_set_p): Ditto.
	(loongarch_symbol_insns): Ditto.
	(loongarch_cannot_force_const_mem): Ditto.
	(loongarch_valid_offset_p): Ditto.
	(loongarch_valid_index_p): Ditto.
	(loongarch_classify_address): Ditto.
	(loongarch_address_insns): Ditto.
	(loongarch_ldst_scaled_shift): Ditto.
	(loongarch_const_insns): Ditto.
	(loongarch_split_move_insn_p): Ditto.
	(loongarch_subword_at_byte): Ditto.
	(loongarch_legitimize_move): Ditto.
	(loongarch_builtin_vectorization_cost): Ditto.
	(loongarch_split_move_p): Ditto.
	(loongarch_split_move): Ditto.
	(loongarch_split_move_insn): Ditto.
	(loongarch_output_move_index_float): Ditto.
	(loongarch_split_128bit_move_p): Ditto.
	(loongarch_split_128bit_move): Ditto.
	(loongarch_split_lsx_copy_d): Ditto.
	(loongarch_split_lsx_insert_d): Ditto.
	(loongarch_split_lsx_fill_d): Ditto.
	(loongarch_output_move): Ditto.
	(loongarch_extend_comparands): Ditto.
	(loongarch_print_operand_reloc): Ditto.
	(loongarch_print_operand): Ditto.
	(loongarch_hard_regno_mode_ok_uncached): Ditto.
	(loongarch_hard_regno_call_part_clobbered): Ditto.
	(loongarch_hard_regno_nregs): Ditto.
	(loongarch_class_max_nregs): Ditto.
	(loongarch_can_change_mode_class): Ditto.
	(loongarch_mode_ok_for_mov_fmt_p): Ditto.
	(loongarch_secondary_reload): Ditto.
	(loongarch_vector_mode_supported_p): Ditto.
	(loongarch_preferred_simd_mode): Ditto.
	(loongarch_autovectorize_vector_modes): Ditto.
	(loongarch_lsx_output_division): Ditto.
	(loongarch_option_override_internal): Ditto.
	(loongarch_hard_regno_caller_save_mode): Ditto.
	(MAX_VECT_LEN): Ditto.
	(loongarch_spill_class): Ditto.
	(struct expand_vec_perm_d): Ditto.
	(loongarch_promote_function_mode): Ditto.
	(loongarch_expand_vselect): Ditto.
	(loongarch_starting_frame_offset): Ditto.
	(loongarch_expand_vselect_vconcat): Ditto.
	(TARGET_ASM_ALIGNED_DI_OP): Ditto.
	(TARGET_OPTION_OVERRIDE): Ditto.
	(TARGET_LEGITIMIZE_ADDRESS): Ditto.
	(loongarch_expand_lsx_shuffle): Ditto.
	(TARGET_ASM_SELECT_RTX_SECTION): Ditto.
	(TARGET_ASM_FUNCTION_RODATA_SECTION): Ditto.
	(TARGET_SCHED_INIT): Ditto.
	(TARGET_SCHED_REORDER): Ditto.
	(TARGET_SCHED_REORDER2): Ditto.
	(TARGET_SCHED_VARIABLE_ISSUE): Ditto.
	(TARGET_SCHED_ADJUST_COST): Ditto.
	(TARGET_SCHED_ISSUE_RATE): Ditto.
	(TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD): Ditto.
	(TARGET_FUNCTION_OK_FOR_SIBCALL): Ditto.
	(TARGET_VALID_POINTER_MODE): Ditto.
	(TARGET_REGISTER_MOVE_COST): Ditto.
	(TARGET_MEMORY_MOVE_COST): Ditto.
	(TARGET_RTX_COSTS): Ditto.
	(TARGET_ADDRESS_COST): Ditto.
	(TARGET_IN_SMALL_DATA_P): Ditto.
	(TARGET_PREFERRED_RELOAD_CLASS): Ditto.
	(TARGET_ASM_FILE_START_FILE_DIRECTIVE): Ditto.
	(loongarch_expand_vec_perm): Ditto.
	(TARGET_EXPAND_BUILTIN_VA_START): Ditto.
	(TARGET_PROMOTE_FUNCTION_MODE): Ditto.
	(TARGET_RETURN_IN_MEMORY): Ditto.
	(TARGET_FUNCTION_VALUE): Ditto.
	(TARGET_LIBCALL_VALUE): Ditto.
	(loongarch_try_expand_lsx_vshuf_const): Ditto.
	(TARGET_ASM_OUTPUT_MI_THUNK): Ditto.
	(TARGET_ASM_CAN_OUTPUT_MI_THUNK): Ditto.
	(TARGET_PRINT_OPERAND): Ditto.
	(TARGET_PRINT_OPERAND_ADDRESS): Ditto.
	(TARGET_PRINT_OPERAND_PUNCT_VALID_P): Ditto.
	(TARGET_SETUP_INCOMING_VARARGS): Ditto.
	(TARGET_STRICT_ARGUMENT_NAMING): Ditto.
	(TARGET_MUST_PASS_IN_STACK): Ditto.
	(TARGET_PASS_BY_REFERENCE): Ditto.
	(TARGET_ARG_PARTIAL_BYTES): Ditto.
	(TARGET_FUNCTION_ARG): Ditto.
	(TARGET_FUNCTION_ARG_ADVANCE): Ditto.
	(TARGET_FUNCTION_ARG_BOUNDARY): Ditto.
	(TARGET_SCALAR_MODE_SUPPORTED_P): Ditto.
	(TARGET_INIT_BUILTINS): Ditto.
	(loongarch_expand_vec_perm_const_1): Ditto.
	(loongarch_expand_vec_perm_const_2): Ditto.
	(loongarch_vectorize_vec_perm_const): Ditto.
	(loongarch_sched_reassociation_width): Ditto.
	(loongarch_expand_vector_extract): Ditto.
	(emit_reduc_half): Ditto.
	(loongarch_expand_vector_reduc): Ditto.
	(loongarch_expand_vec_unpack): Ditto.
	(loongarch_lsx_vec_parallel_const_half): Ditto.
	(loongarch_constant_elt_p): Ditto.
	(loongarch_gen_const_int_vector_shuffle): Ditto.
	(loongarch_expand_vector_init): Ditto.
	(loongarch_expand_lsx_cmp): Ditto.
	(loongarch_expand_vec_cond_expr): Ditto.
	(loongarch_expand_vec_cond_mask_expr): Ditto.
	(loongarch_expand_vec_cmp): Ditto.
	(loongarch_case_values_threshold): Ditto.
	(loongarch_build_const_vector): Ditto.
	(loongarch_build_signbit_mask): Ditto.
	(loongarch_builtin_support_vector_misalignment): Ditto.
	(TARGET_ASM_ALIGNED_HI_OP): Ditto.
	(TARGET_ASM_ALIGNED_SI_OP): Ditto.
	(TARGET_VECTORIZE_BUILTIN_VECTORIZATION_COST): Ditto.
	(TARGET_VECTOR_MODE_SUPPORTED_P): Ditto.
	(TARGET_VECTORIZE_PREFERRED_SIMD_MODE): Ditto.
	(TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_MODES): Ditto.
	(TARGET_VECTORIZE_VEC_PERM_CONST): Ditto.
	(TARGET_SCHED_REASSOCIATION_WIDTH): Ditto.
	(TARGET_CASE_VALUES_THRESHOLD): Ditto.
	(TARGET_HARD_REGNO_CALL_PART_CLOBBERED): Ditto.
	(TARGET_VECTORIZE_SUPPORT_VECTOR_MISALIGNMENT): Ditto.
	* config/loongarch/loongarch.h (TARGET_SUPPORTS_WIDE_INT): Ditto.
	(UNITS_PER_LSX_REG): Ditto.
	(BITS_PER_LSX_REG): Ditto.
	(BIGGEST_ALIGNMENT): Ditto.
	(LSX_REG_FIRST): Ditto.
	(LSX_REG_LAST): Ditto.
	(LSX_REG_NUM): Ditto.
	(LSX_REG_P): Ditto.
	(LSX_REG_RTX_P): Ditto.
	(IMM13_OPERAND): Ditto.
	(LSX_SUPPORTED_MODE_P): Ditto.
	* config/loongarch/loongarch.md (unknown,add,sub,not,nor,and,or,xor): Ditto.
	(unknown,add,sub,not,nor,and,or,xor,simd_add): Ditto.
	(unknown,none,QI,HI,SI,DI,TI,SF,DF,TF,FCC): Ditto.
	(mode" ): Ditto.
	(DF): Ditto.
	(SF): Ditto.
	(sf): Ditto.
	(DI): Ditto.
	(SI): Ditto.
	* config/loongarch/predicates.md (const_lsx_branch_operand): Ditto.
	(const_uimm3_operand): Ditto.
	(const_8_to_11_operand): Ditto.
	(const_12_to_15_operand): Ditto.
	(const_uimm4_operand): Ditto.
	(const_uimm6_operand): Ditto.
	(const_uimm7_operand): Ditto.
	(const_uimm8_operand): Ditto.
	(const_imm5_operand): Ditto.
	(const_imm10_operand): Ditto.
	(const_imm13_operand): Ditto.
	(reg_imm10_operand): Ditto.
	(aq8b_operand): Ditto.
	(aq8h_operand): Ditto.
	(aq8w_operand): Ditto.
	(aq8d_operand): Ditto.
	(aq10b_operand): Ditto.
	(aq10h_operand): Ditto.
	(aq10w_operand): Ditto.
	(aq10d_operand): Ditto.
	(aq12b_operand): Ditto.
	(aq12h_operand): Ditto.
	(aq12w_operand): Ditto.
	(aq12d_operand): Ditto.
	(const_m1_operand): Ditto.
	(reg_or_m1_operand): Ditto.
	(const_exp_2_operand): Ditto.
	(const_exp_4_operand): Ditto.
	(const_exp_8_operand): Ditto.
	(const_exp_16_operand): Ditto.
	(const_exp_32_operand): Ditto.
	(const_0_or_1_operand): Ditto.
	(const_0_to_3_operand): Ditto.
	(const_0_to_7_operand): Ditto.
	(const_2_or_3_operand): Ditto.
	(const_4_to_7_operand): Ditto.
	(const_8_to_15_operand): Ditto.
	(const_16_to_31_operand): Ditto.
	(qi_mask_operand): Ditto.
	(hi_mask_operand): Ditto.
	(si_mask_operand): Ditto.
	(d_operand): Ditto.
	(db4_operand): Ditto.
	(db7_operand): Ditto.
	(db8_operand): Ditto.
	(ib3_operand): Ditto.
	(sb4_operand): Ditto.
	(sb5_operand): Ditto.
	(sb8_operand): Ditto.
	(sd8_operand): Ditto.
	(ub4_operand): Ditto.
	(ub8_operand): Ditto.
	(uh4_operand): Ditto.
	(uw4_operand): Ditto.
	(uw5_operand): Ditto.
	(uw6_operand): Ditto.
	(uw8_operand): Ditto.
	(addiur2_operand): Ditto.
	(addiusp_operand): Ditto.
	(andi16_operand): Ditto.
	(movep_src_register): Ditto.
	(movep_src_operand): Ditto.
	(fcc_reload_operand): Ditto.
	(muldiv_target_operand): Ditto.
	(const_vector_same_val_operand): Ditto.
	(const_vector_same_simm5_operand): Ditto.
	(const_vector_same_uimm5_operand): Ditto.
	(const_vector_same_ximm5_operand): Ditto.
	(const_vector_same_uimm6_operand): Ditto.
	(par_const_vector_shf_set_operand): Ditto.
	(reg_or_vector_same_val_operand): Ditto.
	(reg_or_vector_same_simm5_operand): Ditto.
	(reg_or_vector_same_uimm5_operand): Ditto.
	(reg_or_vector_same_ximm5_operand): Ditto.
	(reg_or_vector_same_uimm6_operand): Ditto.
	* doc/md.texi: Ditto.
	* config/loongarch/lsx.md: New file.
---
 gcc/config/loongarch/constraints.md        |  131 +-
 gcc/config/loongarch/loongarch-builtins.cc |   10 +
 gcc/config/loongarch/loongarch-modes.def   |   38 +
 gcc/config/loongarch/loongarch-protos.h    |   31 +
 gcc/config/loongarch/loongarch.cc          | 2223 +++++++++-
 gcc/config/loongarch/loongarch.h           |   65 +-
 gcc/config/loongarch/loongarch.md          |   44 +-
 gcc/config/loongarch/lsx.md                | 4479 ++++++++++++++++++++
 gcc/config/loongarch/predicates.md         |  333 +-
 gcc/doc/md.texi                            |   11 +
 10 files changed, 7182 insertions(+), 183 deletions(-)
 create mode 100644 gcc/config/loongarch/lsx.md

diff --git a/gcc/config/loongarch/constraints.md b/gcc/config/loongarch/constraints.md
index 7a38cd07ae9..39505e45efe 100644
--- a/gcc/config/loongarch/constraints.md
+++ b/gcc/config/loongarch/constraints.md
@@ -76,12 +76,13 @@
 ;;     "Le"
 ;;	 "A signed 32-bit constant can be expressed as Lb + I, but not a
 ;;	  single Lb or I."
-;; "M" <-----unused
-;; "N" <-----unused
-;; "O" <-----unused
-;; "P" <-----unused
+;; "M" "A constant that cannot be loaded using @code{lui}, @code{addiu}
+;;	or @code{ori}."
+;; "N" "A constant in the range -65535 to -1 (inclusive)."
+;; "O" "A signed 15-bit constant."
+;; "P" "A constant in the range 1 to 65535 (inclusive)."
 ;; "Q" <-----unused
-;; "R" <-----unused
+;; "R" "An address that can be used in a non-macro load or store."
 ;; "S" <-----unused
 ;; "T" <-----unused
 ;; "U" <-----unused
@@ -214,6 +215,63 @@ (define_constraint "Le"
   (and (match_code "const_int")
        (match_test "loongarch_addu16i_imm12_operand_p (ival, SImode)")))
 
+(define_constraint "M"
+  "A constant that cannot be loaded using @code{lui}, @code{addiu}
+   or @code{ori}."
+  (and (match_code "const_int")
+       (not (match_test "IMM12_OPERAND (ival)"))
+       (not (match_test "IMM12_OPERAND_UNSIGNED (ival)"))
+       (not (match_test "LU12I_OPERAND (ival)"))))
+
+(define_constraint "N"
+  "A constant in the range -65535 to -1 (inclusive)."
+  (and (match_code "const_int")
+       (match_test "ival >= -0xffff && ival < 0")))
+
+(define_constraint "O"
+  "A signed 15-bit constant."
+  (and (match_code "const_int")
+       (match_test "ival >= -0x4000 && ival < 0x4000")))
+
+(define_constraint "P"
+  "A constant in the range 1 to 65535 (inclusive)."
+  (and (match_code "const_int")
+       (match_test "ival > 0 && ival < 0x10000")))
+
+;; General constraints
+
+(define_memory_constraint "R"
+  "An address that can be used in a non-macro load or store."
+  (and (match_code "mem")
+       (match_test "loongarch_address_insns (XEXP (op, 0), mode, false) == 1")))
+(define_constraint "S"
+  "@internal
+   A constant call address."
+  (and (match_operand 0 "call_insn_operand")
+       (match_test "CONSTANT_P (op)")))
+
+(define_constraint "YG"
+  "@internal
+   A vector zero."
+  (and (match_code "const_vector")
+       (match_test "op == CONST0_RTX (mode)")))
+
+(define_constraint "YA"
+  "@internal
+   An unsigned 6-bit constant."
+  (and (match_code "const_int")
+       (match_test "UIMM6_OPERAND (ival)")))
+
+(define_constraint "YB"
+  "@internal
+   A signed 10-bit constant."
+  (and (match_code "const_int")
+       (match_test "IMM10_OPERAND (ival)")))
+
+(define_constraint "Yb"
+   "@internal"
+   (match_operand 0 "qi_mask_operand"))
+
 (define_constraint "Yd"
   "@internal
    A constant @code{move_operand} that can be safely loaded using
@@ -221,10 +279,73 @@ (define_constraint "Yd"
   (and (match_operand 0 "move_operand")
        (match_test "CONSTANT_P (op)")))
 
+(define_constraint "Yh"
+   "@internal"
+    (match_operand 0 "hi_mask_operand"))
+
+(define_constraint "Yw"
+   "@internal"
+    (match_operand 0 "si_mask_operand"))
+
 (define_constraint "Yx"
    "@internal"
    (match_operand 0 "low_bitmask_operand"))
 
+(define_constraint "YI"
+  "@internal
+   A replicated vector const in which the replicated value is in the range
+   [-512,511]."
+  (and (match_code "const_vector")
+       (match_test "loongarch_const_vector_same_int_p (op, mode, -512, 511)")))
+
+(define_constraint "YC"
+  "@internal
+   A replicated vector const in which the replicated value has a single
+   bit set."
+  (and (match_code "const_vector")
+       (match_test "loongarch_const_vector_bitimm_set_p (op, mode)")))
+
+(define_constraint "YZ"
+  "@internal
+   A replicated vector const in which the replicated value has a single
+   bit clear."
+  (and (match_code "const_vector")
+       (match_test "loongarch_const_vector_bitimm_clr_p (op, mode)")))
+
+(define_constraint "Unv5"
+  "@internal
+   A replicated vector const in which the replicated value is in the range
+   [-31,0]."
+  (and (match_code "const_vector")
+       (match_test "loongarch_const_vector_same_int_p (op, mode, -31, 0)")))
+
+(define_constraint "Uuv5"
+  "@internal
+   A replicated vector const in which the replicated value is in the range
+   [0,31]."
+  (and (match_code "const_vector")
+       (match_test "loongarch_const_vector_same_int_p (op, mode, 0, 31)")))
+
+(define_constraint "Usv5"
+  "@internal
+   A replicated vector const in which the replicated value is in the range
+   [-16,15]."
+  (and (match_code "const_vector")
+       (match_test "loongarch_const_vector_same_int_p (op, mode, -16, 15)")))
+
+(define_constraint "Uuv6"
+  "@internal
+   A replicated vector const in which the replicated value is in the range
+   [0,63]."
+  (and (match_code "const_vector")
+       (match_test "loongarch_const_vector_same_int_p (op, mode, 0, 63)")))
+
+(define_constraint "Urv8"
+  "@internal
+   A replicated vector const with replicated byte values as well as elements"
+  (and (match_code "const_vector")
+       (match_test "loongarch_const_vector_same_bytes_p (op, mode)")))
+
 (define_memory_constraint "ZC"
   "A memory operand whose address is formed by a base register and offset
    that is suitable for use in instructions with the same addressing mode
diff --git a/gcc/config/loongarch/loongarch-builtins.cc b/gcc/config/loongarch/loongarch-builtins.cc
index b929f224dfa..ebe70a986c3 100644
--- a/gcc/config/loongarch/loongarch-builtins.cc
+++ b/gcc/config/loongarch/loongarch-builtins.cc
@@ -36,6 +36,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "fold-const.h"
 #include "expr.h"
 #include "langhooks.h"
+#include "emit-rtl.h"
 
 /* Macros to create an enumeration identifier for a function prototype.  */
 #define LARCH_FTYPE_NAME1(A, B) LARCH_##A##_FTYPE_##B
@@ -297,6 +298,15 @@ loongarch_prepare_builtin_arg (struct expand_operand *op, tree exp,
   create_input_operand (op, value, TYPE_MODE (TREE_TYPE (arg)));
 }
 
+/* Return a const_int vector of VAL with mode MODE.  */
+
+rtx
+loongarch_gen_const_int_vector (machine_mode mode, HOST_WIDE_INT val)
+{
+  rtx c = gen_int_mode (val, GET_MODE_INNER (mode));
+  return gen_const_vec_duplicate (mode, c);
+}
+
 /* Expand instruction ICODE as part of a built-in function sequence.
    Use the first NOPS elements of OPS as the instruction's operands.
    HAS_TARGET_P is true if operand 0 is a target; it is false if the
diff --git a/gcc/config/loongarch/loongarch-modes.def b/gcc/config/loongarch/loongarch-modes.def
index 8082ce993a5..6f57b60525d 100644
--- a/gcc/config/loongarch/loongarch-modes.def
+++ b/gcc/config/loongarch/loongarch-modes.def
@@ -23,3 +23,41 @@ FLOAT_MODE (TF, 16, ieee_quad_format);
 
 /* For floating point conditions in FCC registers.  */
 CC_MODE (FCC);
+
+/* Vector modes.  */
+VECTOR_MODES (INT, 4);	      /* V4QI  V2HI      */
+VECTOR_MODES (INT, 8);	      /* V8QI  V4HI V2SI */
+VECTOR_MODES (FLOAT, 8);      /*       V4HF V2SF */
+
+/* For LARCH LSX 128 bits.  */
+VECTOR_MODES (INT, 16);	      /* V16QI V8HI V4SI V2DI */
+VECTOR_MODES (FLOAT, 16);     /*	    V4SF V2DF */
+
+VECTOR_MODES (INT, 32);	      /* V32QI V16HI V8SI V4DI */
+VECTOR_MODES (FLOAT, 32);     /*	     V8SF V4DF */
+
+/* Double-sized vector modes for vec_concat.  */
+/* VECTOR_MODE (INT, QI, 32);	  V32QI	*/
+/* VECTOR_MODE (INT, HI, 16);	  V16HI	*/
+/* VECTOR_MODE (INT, SI, 8);	  V8SI	*/
+/* VECTOR_MODE (INT, DI, 4);	  V4DI	*/
+/* VECTOR_MODE (FLOAT, SF, 8);	  V8SF	*/
+/* VECTOR_MODE (FLOAT, DF, 4);	  V4DF	*/
+
+VECTOR_MODE (INT, QI, 64);    /* V64QI	*/
+VECTOR_MODE (INT, HI, 32);    /* V32HI	*/
+VECTOR_MODE (INT, SI, 16);    /* V16SI	*/
+VECTOR_MODE (INT, DI, 8);     /* V8DI */
+VECTOR_MODE (FLOAT, SF, 16);  /* V16SF	*/
+VECTOR_MODE (FLOAT, DF, 8);   /* V8DF */
+
+VECTOR_MODES (FRACT, 4);	/* V4QQ  V2HQ */
+VECTOR_MODES (UFRACT, 4);	/* V4UQQ V2UHQ */
+VECTOR_MODES (ACCUM, 4);	/*       V2HA */
+VECTOR_MODES (UACCUM, 4);	/*       V2UHA */
+
+INT_MODE (OI, 32);
+
+/* Keep the OI modes from confusing the compiler into thinking
+   that these modes could actually be used for computation.  They are
+   only holders for vectors during data movement.  */
diff --git a/gcc/config/loongarch/loongarch-protos.h b/gcc/config/loongarch/loongarch-protos.h
index b71b188507a..fc33527cdcf 100644
--- a/gcc/config/loongarch/loongarch-protos.h
+++ b/gcc/config/loongarch/loongarch-protos.h
@@ -85,10 +85,18 @@ extern bool loongarch_split_move_p (rtx, rtx);
 extern void loongarch_split_move (rtx, rtx, rtx);
 extern bool loongarch_addu16i_imm12_operand_p (HOST_WIDE_INT, machine_mode);
 extern void loongarch_split_plus_constant (rtx *, machine_mode);
+extern bool loongarch_split_move_insn_p (rtx, rtx);
+extern void loongarch_split_move_insn (rtx, rtx, rtx);
+extern void loongarch_split_128bit_move (rtx, rtx);
+extern bool loongarch_split_128bit_move_p (rtx, rtx);
+extern void loongarch_split_lsx_copy_d (rtx, rtx, rtx, rtx (*)(rtx, rtx, rtx));
+extern void loongarch_split_lsx_insert_d (rtx, rtx, rtx, rtx);
+extern void loongarch_split_lsx_fill_d (rtx, rtx);
 extern const char *loongarch_output_move (rtx, rtx);
 extern bool loongarch_cfun_has_cprestore_slot_p (void);
 #ifdef RTX_CODE
 extern void loongarch_expand_scc (rtx *);
+extern bool loongarch_expand_vec_cmp (rtx *);
 extern void loongarch_expand_conditional_branch (rtx *);
 extern void loongarch_expand_conditional_move (rtx *);
 extern void loongarch_expand_conditional_trap (rtx);
@@ -110,6 +118,15 @@ extern bool loongarch_small_data_pattern_p (rtx);
 extern rtx loongarch_rewrite_small_data (rtx);
 extern rtx loongarch_return_addr (int, rtx);
 
+extern bool loongarch_const_vector_same_val_p (rtx, machine_mode);
+extern bool loongarch_const_vector_same_bytes_p (rtx, machine_mode);
+extern bool loongarch_const_vector_same_int_p (rtx, machine_mode, HOST_WIDE_INT,
+					  HOST_WIDE_INT);
+extern bool loongarch_const_vector_shuffle_set_p (rtx, machine_mode);
+extern bool loongarch_const_vector_bitimm_set_p (rtx, machine_mode);
+extern bool loongarch_const_vector_bitimm_clr_p (rtx, machine_mode);
+extern rtx loongarch_lsx_vec_parallel_const_half (machine_mode, bool);
+extern rtx loongarch_gen_const_int_vector (machine_mode, HOST_WIDE_INT);
 extern enum reg_class loongarch_secondary_reload_class (enum reg_class,
 							machine_mode,
 							rtx, bool);
@@ -129,6 +146,7 @@ extern const char *loongarch_output_equal_conditional_branch (rtx_insn *,
 							      rtx *,
 							      bool);
 extern const char *loongarch_output_division (const char *, rtx *);
+extern const char *loongarch_lsx_output_division (const char *, rtx *);
 extern const char *loongarch_output_probe_stack_range (rtx, rtx, rtx);
 extern bool loongarch_hard_regno_rename_ok (unsigned int, unsigned int);
 extern int loongarch_dspalu_bypass_p (rtx, rtx);
@@ -156,6 +174,13 @@ union loongarch_gen_fn_ptrs
 extern void loongarch_expand_atomic_qihi (union loongarch_gen_fn_ptrs,
 					  rtx, rtx, rtx, rtx, rtx);
 
+extern void loongarch_expand_vector_init (rtx, rtx);
+extern void loongarch_expand_vec_unpack (rtx op[2], bool, bool);
+extern void loongarch_expand_vec_perm (rtx, rtx, rtx, rtx);
+extern void loongarch_expand_vector_extract (rtx, rtx, int);
+extern void loongarch_expand_vector_reduc (rtx (*)(rtx, rtx, rtx), rtx, rtx);
+
+extern int loongarch_ldst_scaled_shift (machine_mode);
 extern bool loongarch_signed_immediate_p (unsigned HOST_WIDE_INT, int, int);
 extern bool loongarch_unsigned_immediate_p (unsigned HOST_WIDE_INT, int, int);
 extern bool loongarch_12bit_offset_address_p (rtx, machine_mode);
@@ -171,6 +196,9 @@ extern bool loongarch_split_symbol_type (enum loongarch_symbol_type);
 typedef rtx (*mulsidi3_gen_fn) (rtx, rtx, rtx);
 
 extern void loongarch_register_frame_header_opt (void);
+extern void loongarch_expand_vec_cond_expr (machine_mode, machine_mode, rtx *);
+extern void loongarch_expand_vec_cond_mask_expr (machine_mode, machine_mode,
+						 rtx *);
 
 /* Routines implemented in loongarch-c.c.  */
 void loongarch_cpu_cpp_builtins (cpp_reader *);
@@ -180,6 +208,9 @@ extern void loongarch_atomic_assign_expand_fenv (tree *, tree *, tree *);
 extern tree loongarch_builtin_decl (unsigned int, bool);
 extern rtx loongarch_expand_builtin (tree, rtx, rtx subtarget ATTRIBUTE_UNUSED,
 				     machine_mode, int);
+extern tree loongarch_builtin_vectorized_function (unsigned int, tree, tree);
+extern rtx loongarch_gen_const_int_vector_shuffle (machine_mode, int);
 extern tree loongarch_build_builtin_va_list (void);
 
+extern rtx loongarch_build_signbit_mask (machine_mode, bool, bool);
 #endif /* ! GCC_LOONGARCH_PROTOS_H */
diff --git a/gcc/config/loongarch/loongarch.cc b/gcc/config/loongarch/loongarch.cc
index 5b8b93eb24b..9f4a7d7922b 100644
--- a/gcc/config/loongarch/loongarch.cc
+++ b/gcc/config/loongarch/loongarch.cc
@@ -432,7 +432,7 @@ loongarch_flatten_aggregate_argument (const_tree type,
 
 static unsigned
 loongarch_pass_aggregate_num_fpr (const_tree type,
-					loongarch_aggregate_field fields[2])
+				  loongarch_aggregate_field fields[2])
 {
   int n = loongarch_flatten_aggregate_argument (type, fields);
 
@@ -773,7 +773,7 @@ loongarch_setup_incoming_varargs (cumulative_args_t cum,
     {
       rtx ptr = plus_constant (Pmode, virtual_incoming_args_rtx,
 			       REG_PARM_STACK_SPACE (cfun->decl)
-				 - gp_saved * UNITS_PER_WORD);
+			       - gp_saved * UNITS_PER_WORD);
       rtx mem = gen_frame_mem (BLKmode, ptr);
       set_mem_alias_set (mem, get_varargs_alias_set ());
 
@@ -1049,7 +1049,7 @@ rtx
 loongarch_emit_move (rtx dest, rtx src)
 {
   return (can_create_pseudo_p () ? emit_move_insn (dest, src)
-				 : emit_move_insn_1 (dest, src));
+	  : emit_move_insn_1 (dest, src));
 }
 
 /* Save register REG to MEM.  Make the instruction frame-related.  */
@@ -1675,6 +1675,140 @@ loongarch_symbol_binds_local_p (const_rtx x)
     return false;
 }
 
+/* Return true if OP is a constant vector with the number of units in MODE,
+   and each unit has the same bit set.  */
+
+bool
+loongarch_const_vector_bitimm_set_p (rtx op, machine_mode mode)
+{
+  if (GET_CODE (op) == CONST_VECTOR && op != CONST0_RTX (mode))
+    {
+      unsigned HOST_WIDE_INT val = UINTVAL (CONST_VECTOR_ELT (op, 0));
+      int vlog2 = exact_log2 (val & GET_MODE_MASK (GET_MODE_INNER (mode)));
+
+      if (vlog2 != -1)
+	{
+	  gcc_assert (GET_MODE_CLASS (mode) == MODE_VECTOR_INT);
+	  gcc_assert (vlog2 >= 0 && vlog2 <= GET_MODE_UNIT_BITSIZE (mode) - 1);
+	  return loongarch_const_vector_same_val_p (op, mode);
+	}
+    }
+
+  return false;
+}
+
+/* Return true if OP is a constant vector with the number of units in MODE,
+   and each unit has the same bit clear.  */
+
+bool
+loongarch_const_vector_bitimm_clr_p (rtx op, machine_mode mode)
+{
+  if (GET_CODE (op) == CONST_VECTOR && op != CONSTM1_RTX (mode))
+    {
+      unsigned HOST_WIDE_INT val = ~UINTVAL (CONST_VECTOR_ELT (op, 0));
+      int vlog2 = exact_log2 (val & GET_MODE_MASK (GET_MODE_INNER (mode)));
+
+      if (vlog2 != -1)
+	{
+	  gcc_assert (GET_MODE_CLASS (mode) == MODE_VECTOR_INT);
+	  gcc_assert (vlog2 >= 0 && vlog2 <= GET_MODE_UNIT_BITSIZE (mode) - 1);
+	  return loongarch_const_vector_same_val_p (op, mode);
+	}
+    }
+
+  return false;
+}
+
+/* Return true if OP is a constant vector with the number of units in MODE,
+   and each unit has the same value.  */
+
+bool
+loongarch_const_vector_same_val_p (rtx op, machine_mode mode)
+{
+  int i, nunits = GET_MODE_NUNITS (mode);
+  rtx first;
+
+  if (GET_CODE (op) != CONST_VECTOR || GET_MODE (op) != mode)
+    return false;
+
+  first = CONST_VECTOR_ELT (op, 0);
+  for (i = 1; i < nunits; i++)
+    if (!rtx_equal_p (first, CONST_VECTOR_ELT (op, i)))
+      return false;
+
+  return true;
+}
+
+/* Return true if OP is a constant vector with the number of units in MODE,
+   and each unit has the same value as well as replicated bytes in the value.
+*/
+
+bool
+loongarch_const_vector_same_bytes_p (rtx op, machine_mode mode)
+{
+  int i, bytes;
+  HOST_WIDE_INT val, first_byte;
+  rtx first;
+
+  if (!loongarch_const_vector_same_val_p (op, mode))
+    return false;
+
+  first = CONST_VECTOR_ELT (op, 0);
+  bytes = GET_MODE_UNIT_SIZE (mode);
+  val = INTVAL (first);
+  first_byte = val & 0xff;
+  for (i = 1; i < bytes; i++)
+    {
+      val >>= 8;
+      if ((val & 0xff) != first_byte)
+	return false;
+    }
+
+  return true;
+}
+
+/* Return true if OP is a constant vector with the number of units in MODE,
+   and each unit has the same integer value in the range [LOW, HIGH].  */
+
+bool
+loongarch_const_vector_same_int_p (rtx op, machine_mode mode, HOST_WIDE_INT low,
+				   HOST_WIDE_INT high)
+{
+  HOST_WIDE_INT value;
+  rtx elem0;
+
+  if (!loongarch_const_vector_same_val_p (op, mode))
+    return false;
+
+  elem0 = CONST_VECTOR_ELT (op, 0);
+  if (!CONST_INT_P (elem0))
+    return false;
+
+  value = INTVAL (elem0);
+  return (value >= low && value <= high);
+}
+
+/* Return true if OP is a constant vector with repeated 4-element sets
+   in mode MODE.  */
+
+bool
+loongarch_const_vector_shuffle_set_p (rtx op, machine_mode mode)
+{
+  int nunits = GET_MODE_NUNITS (mode);
+  int nsets = nunits / 4;
+  int set = 0;
+  int i, j;
+
+  /* Check if we have the same 4-element sets.  */
+  for (j = 0; j < nsets; j++, set = 4 * j)
+    for (i = 0; i < 4; i++)
+      if ((INTVAL (XVECEXP (op, 0, i))
+	   != (INTVAL (XVECEXP (op, 0, set + i)) - set))
+	  || !IN_RANGE (INTVAL (XVECEXP (op, 0, set + i)), 0, set + 3))
+	return false;
+  return true;
+}
+
 /* Return true if rtx constants of mode MODE should be put into a small
    data section.  */
 
@@ -1792,6 +1926,11 @@ loongarch_symbolic_constant_p (rtx x, enum loongarch_symbol_type *symbol_type)
 static int
 loongarch_symbol_insns (enum loongarch_symbol_type type, machine_mode mode)
 {
+  /* LSX LD.* and ST.* cannot support loading symbols via an immediate
+     operand.  */
+  if (LSX_SUPPORTED_MODE_P (mode))
+    return 0;
+
   switch (type)
     {
     case SYMBOL_GOT_DISP:
@@ -1838,7 +1977,8 @@ loongarch_cannot_force_const_mem (machine_mode mode, rtx x)
      references, reload will consider forcing C into memory and using
      one of the instruction's memory alternatives.  Returning false
      here will force it to use an input reload instead.  */
-  if (CONST_INT_P (x) && loongarch_legitimate_constant_p (mode, x))
+  if ((CONST_INT_P (x) || GET_CODE (x) == CONST_VECTOR)
+      && loongarch_legitimate_constant_p (mode, x))
     return true;
 
   split_const (x, &base, &offset);
@@ -1915,6 +2055,12 @@ loongarch_valid_offset_p (rtx x, machine_mode mode)
       && !IMM12_OPERAND (INTVAL (x) + GET_MODE_SIZE (mode) - UNITS_PER_WORD))
     return false;
 
+  /* LSX LD.* and ST.* supports 10-bit signed offsets.  */
+  if (LSX_SUPPORTED_MODE_P (mode)
+      && !loongarch_signed_immediate_p (INTVAL (x), 10,
+					loongarch_ldst_scaled_shift (mode)))
+    return false;
+
   return true;
 }
 
@@ -1999,7 +2145,7 @@ loongarch_valid_lo_sum_p (enum loongarch_symbol_type symbol_type,
 
 static bool
 loongarch_valid_index_p (struct loongarch_address_info *info, rtx x,
-			  machine_mode mode, bool strict_p)
+			 machine_mode mode, bool strict_p)
 {
   rtx index;
 
@@ -2052,7 +2198,7 @@ loongarch_classify_address (struct loongarch_address_info *info, rtx x,
 	}
 
       if (loongarch_valid_base_register_p (XEXP (x, 1), mode, strict_p)
-	 && loongarch_valid_index_p (info, XEXP (x, 0), mode, strict_p))
+	  && loongarch_valid_index_p (info, XEXP (x, 0), mode, strict_p))
 	{
 	  info->reg = XEXP (x, 1);
 	  return true;
@@ -2127,6 +2273,7 @@ loongarch_address_insns (rtx x, machine_mode mode, bool might_split_p)
 {
   struct loongarch_address_info addr;
   int factor;
+  bool lsx_p = !might_split_p && LSX_SUPPORTED_MODE_P (mode);
 
   if (!loongarch_classify_address (&addr, x, mode, false))
     return 0;
@@ -2144,15 +2291,29 @@ loongarch_address_insns (rtx x, machine_mode mode, bool might_split_p)
     switch (addr.type)
       {
       case ADDRESS_REG:
+	if (lsx_p)
+	  {
+	    /* LSX LD.* and ST.* supports 10-bit signed offsets.  */
+	    if (loongarch_signed_immediate_p (INTVAL (addr.offset), 10,
+					      loongarch_ldst_scaled_shift (mode)))
+	      return 1;
+	    else
+	      return 0;
+	  }
+	return factor;
+
       case ADDRESS_REG_REG:
-      case ADDRESS_CONST_INT:
 	return factor;
 
+      case ADDRESS_CONST_INT:
+	return lsx_p ? 0 : factor;
+
       case ADDRESS_LO_SUM:
 	return factor + 1;
 
       case ADDRESS_SYMBOLIC:
-	return factor * loongarch_symbol_insns (addr.symbol_type, mode);
+	return lsx_p ? 0
+	  : factor * loongarch_symbol_insns (addr.symbol_type, mode);
       }
   return 0;
 }
@@ -2178,6 +2339,19 @@ loongarch_signed_immediate_p (unsigned HOST_WIDE_INT x, int bits,
   return loongarch_unsigned_immediate_p (x, bits, shift);
 }
 
+/* Return the scale shift that applied to LSX LD/ST address offset.  */
+
+int
+loongarch_ldst_scaled_shift (machine_mode mode)
+{
+  int shift = exact_log2 (GET_MODE_UNIT_SIZE (mode));
+
+  if (shift < 0 || shift > 8)
+    gcc_unreachable ();
+
+  return shift;
+}
+
 /* Return true if X is a legitimate address with a 12-bit offset
    or addr.type is ADDRESS_LO_SUM.
    MODE is the mode of the value being accessed.  */
@@ -2245,6 +2419,9 @@ loongarch_const_insns (rtx x)
       return loongarch_integer_cost (INTVAL (x));
 
     case CONST_VECTOR:
+      if (LSX_SUPPORTED_MODE_P (GET_MODE (x))
+	  && loongarch_const_vector_same_int_p (x, GET_MODE (x), -512, 511))
+	return 1;
       /* Fall through.  */
     case CONST_DOUBLE:
       return x == CONST0_RTX (GET_MODE (x)) ? 1 : 0;
@@ -2279,7 +2456,7 @@ loongarch_const_insns (rtx x)
     case SYMBOL_REF:
     case LABEL_REF:
       return loongarch_symbol_insns (
-	loongarch_classify_symbol (x), MAX_MACHINE_MODE);
+		loongarch_classify_symbol (x), MAX_MACHINE_MODE);
 
     default:
       return 0;
@@ -2301,7 +2478,26 @@ loongarch_split_const_insns (rtx x)
   return low + high;
 }
 
-static bool loongarch_split_move_insn_p (rtx dest, rtx src);
+bool loongarch_split_move_insn_p (rtx dest, rtx src);
+/* Return one word of 128-bit value OP, taking into account the fixed
+   endianness of certain registers.  BYTE selects from the byte address.  */
+
+rtx
+loongarch_subword_at_byte (rtx op, unsigned int byte)
+{
+  machine_mode mode;
+
+  mode = GET_MODE (op);
+  if (mode == VOIDmode)
+    mode = TImode;
+
+  gcc_assert (!FP_REG_RTX_P (op));
+
+  if (MEM_P (op))
+    return loongarch_rewrite_small_data (adjust_address (op, word_mode, byte));
+
+  return simplify_gen_subreg (word_mode, op, mode, byte);
+}
 
 /* Return the number of instructions needed to implement INSN,
    given that it loads from or stores to MEM.  */
@@ -3062,9 +3258,10 @@ loongarch_legitimize_move (machine_mode mode, rtx dest, rtx src)
 
   /* Both src and dest are non-registers;  one special case is supported where
      the source is (const_int 0) and the store can source the zero register.
-     */
+     LSX is never able to source the zero register directly in
+     memory operations.  */
   if (!register_operand (dest, mode) && !register_operand (src, mode)
-      && !const_0_operand (src, mode))
+      && (!const_0_operand (src, mode) || LSX_SUPPORTED_MODE_P (mode)))
     {
       loongarch_emit_move (dest, force_reg (mode, src));
       return true;
@@ -3636,6 +3833,54 @@ loongarch_rtx_costs (rtx x, machine_mode mode, int outer_code,
     }
 }
 
+/* Vectorizer cost model implementation.  */
+
+/* Implement targetm.vectorize.builtin_vectorization_cost.  */
+
+static int
+loongarch_builtin_vectorization_cost (enum vect_cost_for_stmt type_of_cost,
+				      tree vectype,
+				      int misalign ATTRIBUTE_UNUSED)
+{
+  unsigned elements;
+
+  switch (type_of_cost)
+    {
+      case scalar_stmt:
+      case scalar_load:
+      case vector_stmt:
+      case vector_load:
+      case vec_to_scalar:
+      case scalar_to_vec:
+      case cond_branch_not_taken:
+      case vec_promote_demote:
+      case scalar_store:
+      case vector_store:
+	return 1;
+
+      case vec_perm:
+	return 1;
+
+      case unaligned_load:
+      case vector_gather_load:
+	return 2;
+
+      case unaligned_store:
+      case vector_scatter_store:
+	return 10;
+
+      case cond_branch_taken:
+	return 3;
+
+      case vec_construct:
+	elements = TYPE_VECTOR_SUBPARTS (vectype);
+	return elements / 2 + 1;
+
+      default:
+	gcc_unreachable ();
+    }
+}
+
 /* Implement TARGET_ADDRESS_COST.  */
 
 static int
@@ -3690,6 +3935,11 @@ loongarch_split_move_p (rtx dest, rtx src)
       if (FP_REG_RTX_P (src) && MEM_P (dest))
 	return false;
     }
+
+  /* Check if LSX moves need splitting.  */
+  if (LSX_SUPPORTED_MODE_P (GET_MODE (dest)))
+    return loongarch_split_128bit_move_p (dest, src);
+
   /* Otherwise split all multiword moves.  */
   return size > UNITS_PER_WORD;
 }
@@ -3703,7 +3953,9 @@ loongarch_split_move (rtx dest, rtx src, rtx insn_)
   rtx low_dest;
 
   gcc_checking_assert (loongarch_split_move_p (dest, src));
-  if (FP_REG_RTX_P (dest) || FP_REG_RTX_P (src))
+  if (LSX_SUPPORTED_MODE_P (GET_MODE (dest)))
+    loongarch_split_128bit_move (dest, src);
+  else if (FP_REG_RTX_P (dest) || FP_REG_RTX_P (src))
     {
       if (!TARGET_64BIT && GET_MODE (dest) == DImode)
 	emit_insn (gen_move_doubleword_fprdi (dest, src));
@@ -3807,12 +4059,21 @@ loongarch_split_plus_constant (rtx *op, machine_mode mode)
 
 /* Return true if a move from SRC to DEST in INSN should be split.  */
 
-static bool
+bool
 loongarch_split_move_insn_p (rtx dest, rtx src)
 {
   return loongarch_split_move_p (dest, src);
 }
 
+/* Split a move from SRC to DEST in INSN, given that
+   loongarch_split_move_insn_p holds.  */
+
+void
+loongarch_split_move_insn (rtx dest, rtx src, rtx insn)
+{
+  loongarch_split_move (dest, src, insn);
+}
+
 /* Implement TARGET_CONSTANT_ALIGNMENT.  */
 
 static HOST_WIDE_INT
@@ -3859,7 +4120,7 @@ const char *
 loongarch_output_move_index_float (rtx x, machine_mode mode, bool ldr)
 {
   int index = exact_log2 (GET_MODE_SIZE (mode));
-  if (!IN_RANGE (index, 2, 3))
+  if (!IN_RANGE (index, 2, 4))
     return NULL;
 
   struct loongarch_address_info info;
@@ -3868,20 +4129,216 @@ loongarch_output_move_index_float (rtx x, machine_mode mode, bool ldr)
       || !loongarch_legitimate_address_p (mode, x, false))
     return NULL;
 
-  const char *const insn[][2] =
+  const char *const insn[][3] =
     {
 	{
 	  "fstx.s\t%1,%0",
-	  "fstx.d\t%1,%0"
+	  "fstx.d\t%1,%0",
+	  "vstx\t%w1,%0"
 	},
 	{
 	  "fldx.s\t%0,%1",
-	  "fldx.d\t%0,%1"
-	},
+	  "fldx.d\t%0,%1",
+	  "vldx\t%w0,%1"
+	}
     };
 
   return insn[ldr][index-2];
 }
+
+/* Return true if a 128-bit move from SRC to DEST should be split.  */
+
+bool
+loongarch_split_128bit_move_p (rtx dest, rtx src)
+{
+  /* LSX-to-LSX moves can be done in a single instruction.  */
+  if (FP_REG_RTX_P (src) && FP_REG_RTX_P (dest))
+    return false;
+
+  /* Check for LSX loads and stores.  */
+  if (FP_REG_RTX_P (dest) && MEM_P (src))
+    return false;
+  if (FP_REG_RTX_P (src) && MEM_P (dest))
+    return false;
+
+  /* Check for LSX set to an immediate const vector with valid replicated
+     element.  */
+  if (FP_REG_RTX_P (dest)
+      && loongarch_const_vector_same_int_p (src, GET_MODE (src), -512, 511))
+    return false;
+
+  /* Check for LSX load zero immediate.  */
+  if (FP_REG_RTX_P (dest) && src == CONST0_RTX (GET_MODE (src)))
+    return false;
+
+  return true;
+}
+
+/* Split a 128-bit move from SRC to DEST.  */
+
+void
+loongarch_split_128bit_move (rtx dest, rtx src)
+{
+  int byte, index;
+  rtx low_dest, low_src, d, s;
+
+  if (FP_REG_RTX_P (dest))
+    {
+      gcc_assert (!MEM_P (src));
+
+      rtx new_dest = dest;
+      if (!TARGET_64BIT)
+	{
+	  if (GET_MODE (dest) != V4SImode)
+	    new_dest = simplify_gen_subreg (V4SImode, dest, GET_MODE (dest), 0);
+	}
+      else
+	{
+	  if (GET_MODE (dest) != V2DImode)
+	    new_dest = simplify_gen_subreg (V2DImode, dest, GET_MODE (dest), 0);
+	}
+
+      for (byte = 0, index = 0; byte < GET_MODE_SIZE (TImode);
+	   byte += UNITS_PER_WORD, index++)
+	{
+	  s = loongarch_subword_at_byte (src, byte);
+	  if (!TARGET_64BIT)
+	    emit_insn (gen_lsx_vinsgr2vr_w (new_dest, s, new_dest,
+					    GEN_INT (1 << index)));
+	  else
+	    emit_insn (gen_lsx_vinsgr2vr_d (new_dest, s, new_dest,
+					    GEN_INT (1 << index)));
+	}
+    }
+  else if (FP_REG_RTX_P (src))
+    {
+      gcc_assert (!MEM_P (dest));
+
+      rtx new_src = src;
+      if (!TARGET_64BIT)
+	{
+	  if (GET_MODE (src) != V4SImode)
+	    new_src = simplify_gen_subreg (V4SImode, src, GET_MODE (src), 0);
+	}
+      else
+	{
+	  if (GET_MODE (src) != V2DImode)
+	    new_src = simplify_gen_subreg (V2DImode, src, GET_MODE (src), 0);
+	}
+
+      for (byte = 0, index = 0; byte < GET_MODE_SIZE (TImode);
+	   byte += UNITS_PER_WORD, index++)
+	{
+	  d = loongarch_subword_at_byte (dest, byte);
+	  if (!TARGET_64BIT)
+	    emit_insn (gen_lsx_vpickve2gr_w (d, new_src, GEN_INT (index)));
+	  else
+	    emit_insn (gen_lsx_vpickve2gr_d (d, new_src, GEN_INT (index)));
+	}
+    }
+  else
+    {
+      low_dest = loongarch_subword_at_byte (dest, 0);
+      low_src = loongarch_subword_at_byte (src, 0);
+      gcc_assert (REG_P (low_dest) && REG_P (low_src));
+      /* Make sure the source register is not written before reading.  */
+      if (REGNO (low_dest) <= REGNO (low_src))
+	{
+	  for (byte = 0; byte < GET_MODE_SIZE (TImode);
+	       byte += UNITS_PER_WORD)
+	    {
+	      d = loongarch_subword_at_byte (dest, byte);
+	      s = loongarch_subword_at_byte (src, byte);
+	      loongarch_emit_move (d, s);
+	    }
+	}
+      else
+	{
+	  for (byte = GET_MODE_SIZE (TImode) - UNITS_PER_WORD; byte >= 0;
+	       byte -= UNITS_PER_WORD)
+	    {
+	      d = loongarch_subword_at_byte (dest, byte);
+	      s = loongarch_subword_at_byte (src, byte);
+	      loongarch_emit_move (d, s);
+	    }
+	}
+    }
+}
+
+/* Split a COPY_S.D with operands DEST, SRC and INDEX.  GEN_FN is a
+   function used to generate subregs.  */
+
+void
+loongarch_split_lsx_copy_d (rtx dest, rtx src, rtx index,
+			    rtx (*gen_fn)(rtx, rtx, rtx))
+{
+  gcc_assert ((GET_MODE (src) == V2DImode && GET_MODE (dest) == DImode)
+	      || (GET_MODE (src) == V2DFmode && GET_MODE (dest) == DFmode));
+
+  /* Note that low is always from the lower index, and high is always
+     from the higher index.  */
+  rtx low = loongarch_subword (dest, false);
+  rtx high = loongarch_subword (dest, true);
+  rtx new_src = simplify_gen_subreg (V4SImode, src, GET_MODE (src), 0);
+
+  emit_insn (gen_fn (low, new_src, GEN_INT (INTVAL (index) * 2)));
+  emit_insn (gen_fn (high, new_src, GEN_INT (INTVAL (index) * 2 + 1)));
+}
+
+/* Split an INSERT.D with operands DEST, SRC1, INDEX and SRC2.  */
+
+void
+loongarch_split_lsx_insert_d (rtx dest, rtx src1, rtx index, rtx src2)
+{
+  int i;
+  gcc_assert (GET_MODE (dest) == GET_MODE (src1));
+  gcc_assert ((GET_MODE (dest) == V2DImode
+	       && (GET_MODE (src2) == DImode || src2 == const0_rtx))
+	      || (GET_MODE (dest) == V2DFmode && GET_MODE (src2) == DFmode));
+
+  /* Note that low is always from the lower index, and high is always
+     from the higher index.  */
+  rtx low = loongarch_subword (src2, false);
+  rtx high = loongarch_subword (src2, true);
+  rtx new_dest = simplify_gen_subreg (V4SImode, dest, GET_MODE (dest), 0);
+  rtx new_src1 = simplify_gen_subreg (V4SImode, src1, GET_MODE (src1), 0);
+  i = exact_log2 (INTVAL (index));
+  gcc_assert (i != -1);
+
+  emit_insn (gen_lsx_vinsgr2vr_w (new_dest, low, new_src1,
+				  GEN_INT (1 << (i * 2))));
+  emit_insn (gen_lsx_vinsgr2vr_w (new_dest, high, new_dest,
+				  GEN_INT (1 << (i * 2 + 1))));
+}
+
+/* Split FILL.D.  */
+
+void
+loongarch_split_lsx_fill_d (rtx dest, rtx src)
+{
+  gcc_assert ((GET_MODE (dest) == V2DImode
+	       && (GET_MODE (src) == DImode || src == const0_rtx))
+	      || (GET_MODE (dest) == V2DFmode && GET_MODE (src) == DFmode));
+
+  /* Note that low is always from the lower index, and high is always
+     from the higher index.  */
+  rtx low, high;
+  if (src == const0_rtx)
+    {
+      low = src;
+      high = src;
+    }
+  else
+    {
+      low = loongarch_subword (src, false);
+      high = loongarch_subword (src, true);
+    }
+  rtx new_dest = simplify_gen_subreg (V4SImode, dest, GET_MODE (dest), 0);
+  emit_insn (gen_lsx_vreplgr2vr_w (new_dest, low));
+  emit_insn (gen_lsx_vinsgr2vr_w (new_dest, high, new_dest, GEN_INT (1 << 1)));
+  emit_insn (gen_lsx_vinsgr2vr_w (new_dest, high, new_dest, GEN_INT (1 << 3)));
+}
 
 /* Return the appropriate instructions to move SRC into DEST.  Assume
    that SRC is operand 1 and DEST is operand 0.  */
@@ -3893,10 +4350,25 @@ loongarch_output_move (rtx dest, rtx src)
   enum rtx_code src_code = GET_CODE (src);
   machine_mode mode = GET_MODE (dest);
   bool dbl_p = (GET_MODE_SIZE (mode) == 8);
+  bool lsx_p = LSX_SUPPORTED_MODE_P (mode);
 
   if (loongarch_split_move_p (dest, src))
     return "#";
 
+  if (lsx_p
+      && dest_code == REG && FP_REG_P (REGNO (dest))
+      && src_code == CONST_VECTOR
+      && CONST_INT_P (CONST_VECTOR_ELT (src, 0)))
+    {
+      gcc_assert (loongarch_const_vector_same_int_p (src, mode, -512, 511));
+      switch (GET_MODE_SIZE (mode))
+	{
+	case 16:
+	  return "vrepli.%v0\t%w0,%E1";
+	default:
+	  gcc_unreachable ();
+	}
+    }
+
   if ((src_code == REG && GP_REG_P (REGNO (src)))
       || (src == CONST0_RTX (mode)))
     {
@@ -3906,7 +4378,21 @@ loongarch_output_move (rtx dest, rtx src)
 	    return "or\t%0,%z1,$r0";
 
 	  if (FP_REG_P (REGNO (dest)))
-	    return dbl_p ? "movgr2fr.d\t%0,%z1" : "movgr2fr.w\t%0,%z1";
+	    {
+	      if (lsx_p)
+		{
+		  gcc_assert (src == CONST0_RTX (GET_MODE (src)));
+		  switch (GET_MODE_SIZE (mode))
+		    {
+		    case 16:
+		      return "vrepli.b\t%w0,0";
+		    default:
+		      gcc_unreachable ();
+		    }
+		}
+
+	      return dbl_p ? "movgr2fr.d\t%0,%z1" : "movgr2fr.w\t%0,%z1";
+	    }
 	}
       if (dest_code == MEM)
 	{
@@ -3948,7 +4434,10 @@ loongarch_output_move (rtx dest, rtx src)
     {
       if (src_code == REG)
 	if (FP_REG_P (REGNO (src)))
-	  return dbl_p ? "movfr2gr.d\t%0,%1" : "movfr2gr.s\t%0,%1";
+	  {
+	    gcc_assert (!lsx_p);
+	    return dbl_p ? "movfr2gr.d\t%0,%1" : "movfr2gr.s\t%0,%1";
+	  }
 
       if (src_code == MEM)
 	{
@@ -3993,7 +4482,7 @@ loongarch_output_move (rtx dest, rtx src)
 	  enum loongarch_symbol_type type = SYMBOL_PCREL;
 
 	  if (UNSPEC_ADDRESS_P (x))
-	     type = UNSPEC_ADDRESS_TYPE (x);
+	    type = UNSPEC_ADDRESS_TYPE (x);
 
 	  if (type == SYMBOL_TLS_LE)
 	    return "lu12i.w\t%0,%h1";
@@ -4028,7 +4517,20 @@ loongarch_output_move (rtx dest, rtx src)
   if (src_code == REG && FP_REG_P (REGNO (src)))
     {
       if (dest_code == REG && FP_REG_P (REGNO (dest)))
-	return dbl_p ? "fmov.d\t%0,%1" : "fmov.s\t%0,%1";
+	{
+	  if (lsx_p)
+	    {
+	      switch (GET_MODE_SIZE (mode))
+		{
+		case 16:
+		  return "vori.b\t%w0,%w1,0";
+		default:
+		  gcc_unreachable ();
+		}
+	    }
+
+	  return dbl_p ? "fmov.d\t%0,%1" : "fmov.s\t%0,%1";
+	}
 
       if (dest_code == MEM)
 	{
@@ -4039,6 +4541,17 @@ loongarch_output_move (rtx dest, rtx src)
 	  if (insn)
 	    return insn;
 
+	  if (lsx_p)
+	    {
+	      switch (GET_MODE_SIZE (mode))
+		{
+		case 16:
+		  return "vst\t%w1,%0";
+		default:
+		  gcc_unreachable ();
+		}
+	    }
+
 	  return dbl_p ? "fst.d\t%1,%0" : "fst.s\t%1,%0";
 	}
     }
@@ -4054,6 +4567,16 @@ loongarch_output_move (rtx dest, rtx src)
 	  if (insn)
 	    return insn;
 
+	  if (lsx_p)
+	    {
+	      switch (GET_MODE_SIZE (mode))
+		{
+		case 16:
+		  return "vld\t%w0,%1";
+		default:
+		  gcc_unreachable ();
+		}
+	    }
 	  return dbl_p ? "fld.d\t%0,%1" : "fld.s\t%0,%1";
 	}
     }
@@ -4243,6 +4766,7 @@ loongarch_extend_comparands (rtx_code code, rtx *op0, rtx *op1)
     }
 }
 
 /* Convert a comparison into something that can be used in a branch.  On
    entry, *OP0 and *OP1 are the values being compared and *CODE is the code
    used to compare them.  Update them to describe the final comparison.  */
@@ -5002,9 +5526,12 @@ loongarch_print_operand_reloc (FILE *file, rtx op, bool hi64_part,
 
    'A'	Print a _DB suffix if the memory model requires a release.
    'b'	Print the address of a memory operand, without offset.
+   'B'	Print CONST_INT OP element 0 of a replicated CONST_VECTOR
+	  as an unsigned byte [0..255].
    'c'  Print an integer.
    'C'	Print the integer branch condition for comparison OP.
    'd'	Print CONST_INT OP in decimal.
+   'E'	Print CONST_INT OP element 0 of a replicated CONST_VECTOR in decimal.
    'F'	Print the FPU branch condition for comparison OP.
    'G'	Print a DBAR insn if the memory model requires a release.
    'H'  Print address 52-61bit relocation associated with OP.
@@ -5020,13 +5547,16 @@ loongarch_print_operand_reloc (FILE *file, rtx op, bool hi64_part,
    't'	Like 'T', but with the EQ/NE cases reversed
    'V'	Print exact log2 of CONST_INT OP element 0 of a replicated
 	  CONST_VECTOR in decimal.
+   'v'	Print the insn size suffix b, h, w or d for vector modes V16QI, V8HI,
+	  V4SI, V2DI, and w, d for vector modes V4SF, V2DF respectively.
    'W'	Print the inverse of the FPU branch condition for comparison OP.
+   'w'	Print a LSX register.
    'X'	Print CONST_INT OP in hexadecimal format.
    'x'	Print the low 16 bits of CONST_INT OP in hexadecimal format.
    'Y'	Print loongarch_fp_conditions[INTVAL (OP)]
    'y'	Print exact log2 of CONST_INT OP in decimal.
    'Z'	Print OP and a comma for 8CC, otherwise print nothing.
-   'z'	Print $0 if OP is zero, otherwise print OP normally.  */
+   'z'	Print $r0 if OP is zero, otherwise print OP normally.  */
 
 static void
 loongarch_print_operand (FILE *file, rtx op, int letter)
@@ -5048,6 +5578,18 @@ loongarch_print_operand (FILE *file, rtx op, int letter)
       if (loongarch_memmodel_needs_rel_acq_fence ((enum memmodel) INTVAL (op)))
        fputs ("_db", file);
       break;
+    case 'E':
+      if (GET_CODE (op) == CONST_VECTOR)
+	{
+	  gcc_assert (loongarch_const_vector_same_val_p (op, GET_MODE (op)));
+	  op = CONST_VECTOR_ELT (op, 0);
+	  gcc_assert (CONST_INT_P (op));
+	  fprintf (file, HOST_WIDE_INT_PRINT_DEC, INTVAL (op));
+	}
+      else
+	output_operand_lossage ("invalid use of '%%%c'", letter);
+      break;
 
     case 'c':
       if (CONST_INT_P (op))
@@ -5098,6 +5640,18 @@ loongarch_print_operand (FILE *file, rtx op, int letter)
       loongarch_print_operand_reloc (file, op, false /* hi64_part*/,
 				     false /* lo_reloc */);
       break;
+    case 'B':
+      if (GET_CODE (op) == CONST_VECTOR)
+	{
+	  gcc_assert (loongarch_const_vector_same_val_p (op, GET_MODE (op)));
+	  op = CONST_VECTOR_ELT (op, 0);
+	  gcc_assert (CONST_INT_P (op));
+	  unsigned HOST_WIDE_INT val8 = UINTVAL (op) & GET_MODE_MASK (QImode);
+	  fprintf (file, HOST_WIDE_INT_PRINT_UNSIGNED, val8);
+	}
+      else
+	output_operand_lossage ("invalid use of '%%%c'", letter);
+      break;
 
     case 'm':
       if (CONST_INT_P (op))
@@ -5144,10 +5698,45 @@ loongarch_print_operand (FILE *file, rtx op, int letter)
 	output_operand_lossage ("invalid use of '%%%c'", letter);
       break;
 
-    case 'W':
-      loongarch_print_float_branch_condition (file, reverse_condition (code),
-					      letter);
-      break;
+    case 'v':
+      switch (GET_MODE (op))
+	{
+	case E_V16QImode:
+	case E_V32QImode:
+	  fprintf (file, "b");
+	  break;
+	case E_V8HImode:
+	case E_V16HImode:
+	  fprintf (file, "h");
+	  break;
+	case E_V4SImode:
+	case E_V4SFmode:
+	case E_V8SImode:
+	case E_V8SFmode:
+	  fprintf (file, "w");
+	  break;
+	case E_V2DImode:
+	case E_V2DFmode:
+	case E_V4DImode:
+	case E_V4DFmode:
+	  fprintf (file, "d");
+	  break;
+	default:
+	  output_operand_lossage ("invalid use of '%%%c'", letter);
+	}
+      break;
+
+    case 'W':
+      loongarch_print_float_branch_condition (file, reverse_condition (code),
+					      letter);
+      break;
+
+    case 'w':
+      if (code == REG && LSX_REG_P (REGNO (op)))
+	fprintf (file, "$vr%s", &reg_names[REGNO (op)][2]);
+      else
+	output_operand_lossage ("invalid use of '%%%c'", letter);
+      break;
 
     case 'x':
       if (CONST_INT_P (op))
@@ -5520,9 +6109,13 @@ loongarch_hard_regno_mode_ok_uncached (unsigned int regno, machine_mode mode)
   size = GET_MODE_SIZE (mode);
   mclass = GET_MODE_CLASS (mode);
 
-  if (GP_REG_P (regno))
+  if (GP_REG_P (regno) && !LSX_SUPPORTED_MODE_P (mode))
     return ((regno - GP_REG_FIRST) & 1) == 0 || size <= UNITS_PER_WORD;
 
+  /* For LSX, allow TImode and 128-bit vector modes in all FPR.  */
+  if (FP_REG_P (regno) && LSX_SUPPORTED_MODE_P (mode))
+    return true;
+
   if (FP_REG_P (regno))
     {
       if (mclass == MODE_FLOAT
@@ -5549,6 +6142,17 @@ loongarch_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
   return loongarch_hard_regno_mode_ok_p[mode][regno];
 }
 
+
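+/* Implement TARGET_HARD_REGNO_CALL_PART_CLOBBERED.  With LSX, only the
+   lower 64 bits of an FPR are preserved across a call, so vector modes
+   wider than 8 bytes kept in FPRs are partially clobbered.  */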
+static bool
+loongarch_hard_regno_call_part_clobbered (unsigned int,
+					  unsigned int regno, machine_mode mode)
+{
+  if (ISA_HAS_LSX && FP_REG_P (regno) && GET_MODE_SIZE (mode) > 8)
+    return true;
+
+  return false;
+}
+
 /* Implement TARGET_HARD_REGNO_NREGS.  */
 
 static unsigned int
@@ -5560,7 +6164,12 @@ loongarch_hard_regno_nregs (unsigned int regno, machine_mode mode)
     return (GET_MODE_SIZE (mode) + 3) / 4;
 
   if (FP_REG_P (regno))
-    return (GET_MODE_SIZE (mode) + UNITS_PER_FPREG - 1) / UNITS_PER_FPREG;
+    {
+      if (LSX_SUPPORTED_MODE_P (mode))
+	return 1;
+
+      return (GET_MODE_SIZE (mode) + UNITS_PER_FPREG - 1) / UNITS_PER_FPREG;
+    }
 
   /* All other registers are word-sized.  */
   return (GET_MODE_SIZE (mode) + UNITS_PER_WORD - 1) / UNITS_PER_WORD;
@@ -5587,8 +6196,12 @@ loongarch_class_max_nregs (enum reg_class rclass, machine_mode mode)
   if (hard_reg_set_intersect_p (left, reg_class_contents[(int) FP_REGS]))
     {
       if (loongarch_hard_regno_mode_ok (FP_REG_FIRST, mode))
-	size = MIN (size, UNITS_PER_FPREG);
-
+	{
+	  if (LSX_SUPPORTED_MODE_P (mode))
+	    size = MIN (size, UNITS_PER_LSX_REG);
+	  else
+	    size = MIN (size, UNITS_PER_FPREG);
+	}
       left &= ~reg_class_contents[FP_REGS];
     }
   if (!hard_reg_set_empty_p (left))
@@ -5599,9 +6212,13 @@ loongarch_class_max_nregs (enum reg_class rclass, machine_mode mode)
 /* Implement TARGET_CAN_CHANGE_MODE_CLASS.  */
 
 static bool
-loongarch_can_change_mode_class (machine_mode, machine_mode,
+loongarch_can_change_mode_class (machine_mode from, machine_mode to,
 				 reg_class_t rclass)
 {
+  /* Allow conversions between different LSX vector modes.  */
+  if (LSX_SUPPORTED_MODE_P (from) && LSX_SUPPORTED_MODE_P (to))
+    return true;
+
   return !reg_classes_intersect_p (FP_REGS, rclass);
 }
 
@@ -5621,7 +6238,7 @@ loongarch_mode_ok_for_mov_fmt_p (machine_mode mode)
       return TARGET_HARD_FLOAT && TARGET_DOUBLE_FLOAT;
 
     default:
-      return 0;
+      return LSX_SUPPORTED_MODE_P (mode);
     }
 }
 
@@ -5778,7 +6395,12 @@ loongarch_secondary_reload (bool in_p ATTRIBUTE_UNUSED, rtx x,
       if (regno < 0
 	  || (MEM_P (x)
 	      && (GET_MODE_SIZE (mode) == 4 || GET_MODE_SIZE (mode) == 8)))
-	/* In this case we can use fld.s, fst.s, fld.d or fst.d.  */
+	/* In this case we can use fld.s, fst.s, fld.d or fst.d, so no
+	   secondary reload register is needed.  */
+	return NO_REGS;
+
+      if (MEM_P (x) && LSX_SUPPORTED_MODE_P (mode))
+	/* In this case we can use LSX LD.* and ST.*.  */
 	return NO_REGS;
 
       if (GP_REG_P (regno) || x == CONST0_RTX (mode))
@@ -5813,6 +6435,14 @@ loongarch_valid_pointer_mode (scalar_int_mode mode)
   return mode == SImode || (TARGET_64BIT && mode == DImode);
 }
 
+/* Implement TARGET_VECTOR_MODE_SUPPORTED_P.  */
+
+static bool
+loongarch_vector_mode_supported_p (machine_mode mode)
+{
+  return LSX_SUPPORTED_MODE_P (mode);
+}
+
 /* Implement TARGET_SCALAR_MODE_SUPPORTED_P.  */
 
 static bool
@@ -5825,6 +6455,48 @@ loongarch_scalar_mode_supported_p (scalar_mode mode)
   return default_scalar_mode_supported_p (mode);
 }
 
+/* Implement TARGET_VECTORIZE_PREFERRED_SIMD_MODE.  */
+
+static machine_mode
+loongarch_preferred_simd_mode (scalar_mode mode)
+{
+  if (!ISA_HAS_LSX)
+    return word_mode;
+
+  switch (mode)
+    {
+    case E_QImode:
+      return E_V16QImode;
+    case E_HImode:
+      return E_V8HImode;
+    case E_SImode:
+      return E_V4SImode;
+    case E_DImode:
+      return E_V2DImode;
+
+    case E_SFmode:
+      return E_V4SFmode;
+
+    case E_DFmode:
+      return E_V2DFmode;
+
+    default:
+      break;
+    }
+  return word_mode;
+}
+
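+/* Implement TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_MODES.  */
+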
+static unsigned int
+loongarch_autovectorize_vector_modes (vector_modes *modes, bool)
+{
+  if (ISA_HAS_LSX)
+    {
+      modes->safe_push (V16QImode);
+    }
+
+  return 0;
+}
+
 /* Return the assembly code for INSN, which has the operands given by
    OPERANDS, and which branches to OPERANDS[0] if some condition is true.
    BRANCH_IF_TRUE is the asm template that should be used if OPERANDS[0]
@@ -5989,6 +6661,29 @@ loongarch_output_division (const char *division, rtx *operands)
   return s;
 }
 
+/* Return the assembly code for an LSX vector division or modulo
+   instruction, which has the operands given by OPERANDS.  Add in a
+   divide-by-zero check if needed.  */
+
+const char *
+loongarch_lsx_output_division (const char *division, rtx *operands)
+{
+  const char *s;
+
+  s = division;
+  if (TARGET_CHECK_ZERO_DIV)
+    {
+      if (ISA_HAS_LSX)
+	{
+	  output_asm_insn ("vsetallnez.%v0\t$fcc7,%w2",operands);
+	  output_asm_insn (s, operands);
+	  output_asm_insn ("bcnez\t$fcc7,1f", operands);
+	}
+      s = "break\t7\n1:";
+    }
+  return s;
+}
+
 /* Implement TARGET_SCHED_ADJUST_COST.  We assume that anti and output
    dependencies have no cost.  */
 
@@ -6258,6 +6953,9 @@ loongarch_option_override_internal (struct gcc_options *opts)
   if (TARGET_DIRECT_EXTERN_ACCESS && flag_shlib)
     error ("%qs cannot be used for compiling a shared library",
 	   "-mdirect-extern-access");
+  if (loongarch_vector_access_cost == 0)
+    loongarch_vector_access_cost = 5;
 
   switch (la_target.cmodel)
     {
@@ -6476,64 +7174,60 @@ loongarch_trampoline_init (rtx m_tramp, tree fndecl, rtx chain_value)
   emit_insn (gen_clear_cache (addr, end_addr));
 }
 
-/* Implement HARD_REGNO_CALLER_SAVE_MODE.  */
-
-machine_mode
-loongarch_hard_regno_caller_save_mode (unsigned int regno, unsigned int nregs,
-				       machine_mode mode)
-{
-  /* For performance, avoid saving/restoring upper parts of a register
-     by returning MODE as save mode when the mode is known.  */
-  if (mode == VOIDmode)
-    return choose_hard_reg_mode (regno, nregs, NULL);
-  else
-    return mode;
-}
+/* Generate or test for an insn that supports a constant permutation.  */
 
-/* Implement TARGET_SPILL_CLASS.  */
+#define MAX_VECT_LEN 32
 
-static reg_class_t
-loongarch_spill_class (reg_class_t rclass ATTRIBUTE_UNUSED,
-		       machine_mode mode ATTRIBUTE_UNUSED)
+struct expand_vec_perm_d
 {
-  return NO_REGS;
-}
-
-/* Implement TARGET_PROMOTE_FUNCTION_MODE.  */
+  rtx target, op0, op1;
+  unsigned char perm[MAX_VECT_LEN];
+  machine_mode vmode;
+  unsigned char nelt;
+  bool one_vector_p;
+  bool testing_p;
+};
 
-/* This function is equivalent to default_promote_function_mode_always_promote
-   except that it returns a promoted mode even if type is NULL_TREE.  This is
-   needed by libcalls which have no type (only a mode) such as fixed conversion
-   routines that take a signed or unsigned char/short argument and convert it
-   to a fixed type.  */
+/* Construct (set target (vec_select op0 (parallel perm))) and
+   return true if that's a valid instruction in the active ISA.  */
 
-static machine_mode
-loongarch_promote_function_mode (const_tree type ATTRIBUTE_UNUSED,
-				 machine_mode mode,
-				 int *punsignedp ATTRIBUTE_UNUSED,
-				 const_tree fntype ATTRIBUTE_UNUSED,
-				 int for_return ATTRIBUTE_UNUSED)
+static bool
+loongarch_expand_vselect (rtx target, rtx op0,
+			  const unsigned char *perm, unsigned nelt)
 {
-  int unsignedp;
+  rtx rperm[MAX_VECT_LEN], x;
+  rtx_insn *insn;
+  unsigned i;
 
-  if (type != NULL_TREE)
-    return promote_mode (type, mode, punsignedp);
+  for (i = 0; i < nelt; ++i)
+    rperm[i] = GEN_INT (perm[i]);
 
-  unsignedp = *punsignedp;
-  PROMOTE_MODE (mode, unsignedp, type);
-  *punsignedp = unsignedp;
-  return mode;
+  x = gen_rtx_PARALLEL (VOIDmode, gen_rtvec_v (nelt, rperm));
+  x = gen_rtx_VEC_SELECT (GET_MODE (target), op0, x);
+  x = gen_rtx_SET (target, x);
+
+  insn = emit_insn (x);
+  if (recog_memoized (insn) < 0)
+    {
+      remove_insn (insn);
+      return false;
+    }
+  return true;
 }
 
-/* Implement TARGET_STARTING_FRAME_OFFSET.  See loongarch_compute_frame_info
-   for details about the frame layout.  */
+/* Similar, but generate a vec_concat from op0 and op1 as well.  */
 
-static HOST_WIDE_INT
-loongarch_starting_frame_offset (void)
+static bool
+loongarch_expand_vselect_vconcat (rtx target, rtx op0, rtx op1,
+				  const unsigned char *perm, unsigned nelt)
 {
-  if (FRAME_GROWS_DOWNWARD)
-    return 0;
-  return crtl->outgoing_args_size;
+  machine_mode v2mode;
+  rtx x;
+
+  if (!GET_MODE_2XWIDER_MODE (GET_MODE (op0)).exists (&v2mode))
+    return false;
+  x = gen_rtx_VEC_CONCAT (v2mode, op0, op1);
+  return loongarch_expand_vselect (target, x, perm, nelt);
 }
 
 static tree
@@ -6796,106 +7490,1289 @@ loongarch_set_handled_components (sbitmap components)
 #define TARGET_ASM_ALIGNED_SI_OP "\t.word\t"
 #undef TARGET_ASM_ALIGNED_DI_OP
 #define TARGET_ASM_ALIGNED_DI_OP "\t.dword\t"
+
+/* Construct (set target (vec_select op0 (parallel selector))) and
+   return true if that's a valid instruction in the active ISA.  */
 
-#undef TARGET_OPTION_OVERRIDE
-#define TARGET_OPTION_OVERRIDE loongarch_option_override
-
-#undef TARGET_LEGITIMIZE_ADDRESS
-#define TARGET_LEGITIMIZE_ADDRESS loongarch_legitimize_address
+static bool
+loongarch_expand_lsx_shuffle (struct expand_vec_perm_d *d)
+{
+  rtx x, elts[MAX_VECT_LEN];
+  rtvec v;
+  rtx_insn *insn;
+  unsigned i;
 
-#undef TARGET_ASM_SELECT_RTX_SECTION
-#define TARGET_ASM_SELECT_RTX_SECTION loongarch_select_rtx_section
-#undef TARGET_ASM_FUNCTION_RODATA_SECTION
-#define TARGET_ASM_FUNCTION_RODATA_SECTION loongarch_function_rodata_section
+  if (!ISA_HAS_LSX)
+    return false;
 
-#undef TARGET_SCHED_INIT
-#define TARGET_SCHED_INIT loongarch_sched_init
-#undef TARGET_SCHED_REORDER
-#define TARGET_SCHED_REORDER loongarch_sched_reorder
-#undef TARGET_SCHED_REORDER2
-#define TARGET_SCHED_REORDER2 loongarch_sched_reorder2
-#undef TARGET_SCHED_VARIABLE_ISSUE
-#define TARGET_SCHED_VARIABLE_ISSUE loongarch_variable_issue
-#undef TARGET_SCHED_ADJUST_COST
-#define TARGET_SCHED_ADJUST_COST loongarch_adjust_cost
-#undef TARGET_SCHED_ISSUE_RATE
-#define TARGET_SCHED_ISSUE_RATE loongarch_issue_rate
-#undef TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD
-#define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD \
-  loongarch_multipass_dfa_lookahead
+  for (i = 0; i < d->nelt; i++)
+    elts[i] = GEN_INT (d->perm[i]);
 
-#undef TARGET_FUNCTION_OK_FOR_SIBCALL
-#define TARGET_FUNCTION_OK_FOR_SIBCALL loongarch_function_ok_for_sibcall
+  v = gen_rtvec_v (d->nelt, elts);
+  x = gen_rtx_PARALLEL (VOIDmode, v);
 
-#undef TARGET_VALID_POINTER_MODE
-#define TARGET_VALID_POINTER_MODE loongarch_valid_pointer_mode
-#undef TARGET_REGISTER_MOVE_COST
-#define TARGET_REGISTER_MOVE_COST loongarch_register_move_cost
-#undef TARGET_MEMORY_MOVE_COST
-#define TARGET_MEMORY_MOVE_COST loongarch_memory_move_cost
-#undef TARGET_RTX_COSTS
-#define TARGET_RTX_COSTS loongarch_rtx_costs
-#undef TARGET_ADDRESS_COST
-#define TARGET_ADDRESS_COST loongarch_address_cost
+  if (!loongarch_const_vector_shuffle_set_p (x, d->vmode))
+    return false;
 
-#undef TARGET_IN_SMALL_DATA_P
-#define TARGET_IN_SMALL_DATA_P loongarch_in_small_data_p
+  x = gen_rtx_VEC_SELECT (d->vmode, d->op0, x);
+  x = gen_rtx_SET (d->target, x);
 
-#undef TARGET_PREFERRED_RELOAD_CLASS
-#define TARGET_PREFERRED_RELOAD_CLASS loongarch_preferred_reload_class
+  insn = emit_insn (x);
+  if (recog_memoized (insn) < 0)
+    {
+      remove_insn (insn);
+      return false;
+    }
+  return true;
+}
 
-#undef TARGET_ASM_FILE_START_FILE_DIRECTIVE
-#define TARGET_ASM_FILE_START_FILE_DIRECTIVE true
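+/* Expand a variable vector permutation: fill TARGET with the elements of
+   OP0 and OP1 chosen by the indices in SEL, using an LSX vshuf.*
+   instruction.  */
+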
+void
+loongarch_expand_vec_perm (rtx target, rtx op0, rtx op1, rtx sel)
+{
+  machine_mode vmode = GET_MODE (target);
 
-#undef TARGET_EXPAND_BUILTIN_VA_START
-#define TARGET_EXPAND_BUILTIN_VA_START loongarch_va_start
+  gcc_checking_assert (vmode == E_V16QImode
+      || vmode == E_V2DImode || vmode == E_V2DFmode
+      || vmode == E_V4SImode || vmode == E_V4SFmode
+      || vmode == E_V8HImode);
+  gcc_checking_assert (GET_MODE (op0) == vmode);
+  gcc_checking_assert (GET_MODE (op1) == vmode);
+  gcc_checking_assert (GET_MODE (sel) == vmode);
+  gcc_checking_assert (ISA_HAS_LSX);
 
-#undef TARGET_PROMOTE_FUNCTION_MODE
-#define TARGET_PROMOTE_FUNCTION_MODE loongarch_promote_function_mode
-#undef TARGET_RETURN_IN_MEMORY
-#define TARGET_RETURN_IN_MEMORY loongarch_return_in_memory
+  switch (vmode)
+    {
+    case E_V16QImode:
+      emit_insn (gen_lsx_vshuf_b (target, op1, op0, sel));
+      break;
+    case E_V2DFmode:
+      emit_insn (gen_lsx_vshuf_d_f (target, sel, op1, op0));
+      break;
+    case E_V2DImode:
+      emit_insn (gen_lsx_vshuf_d (target, sel, op1, op0));
+      break;
+    case E_V4SFmode:
+      emit_insn (gen_lsx_vshuf_w_f (target, sel, op1, op0));
+      break;
+    case E_V4SImode:
+      emit_insn (gen_lsx_vshuf_w (target, sel, op1, op0));
+      break;
+    case E_V8HImode:
+      emit_insn (gen_lsx_vshuf_h (target, sel, op1, op0));
+      break;
+    default:
+      break;
+    }
+}
 
-#undef TARGET_FUNCTION_VALUE
-#define TARGET_FUNCTION_VALUE loongarch_function_value
-#undef TARGET_LIBCALL_VALUE
-#define TARGET_LIBCALL_VALUE loongarch_libcall_value
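+/* Try to expand a constant vector permutation with a single LSX vshuf.*
+   instruction, loading the selector into the target register first.
+   Return true on success.  */
+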
+static bool
+loongarch_try_expand_lsx_vshuf_const (struct expand_vec_perm_d *d)
+{
+  int i;
+  rtx target, op0, op1, sel, tmp;
+  rtx rperm[MAX_VECT_LEN];
 
-#undef TARGET_ASM_OUTPUT_MI_THUNK
-#define TARGET_ASM_OUTPUT_MI_THUNK loongarch_output_mi_thunk
-#undef TARGET_ASM_CAN_OUTPUT_MI_THUNK
-#define TARGET_ASM_CAN_OUTPUT_MI_THUNK \
-  hook_bool_const_tree_hwi_hwi_const_tree_true
+  if (d->vmode == E_V2DImode || d->vmode == E_V2DFmode
+	|| d->vmode == E_V4SImode || d->vmode == E_V4SFmode
+	|| d->vmode == E_V8HImode || d->vmode == E_V16QImode)
+    {
+      target = d->target;
+      op0 = d->op0;
+      op1 = d->one_vector_p ? d->op0 : d->op1;
 
-#undef TARGET_PRINT_OPERAND
-#define TARGET_PRINT_OPERAND loongarch_print_operand
-#undef TARGET_PRINT_OPERAND_ADDRESS
-#define TARGET_PRINT_OPERAND_ADDRESS loongarch_print_operand_address
-#undef TARGET_PRINT_OPERAND_PUNCT_VALID_P
-#define TARGET_PRINT_OPERAND_PUNCT_VALID_P \
-  loongarch_print_operand_punct_valid_p
+      if (GET_MODE (op0) != GET_MODE (op1)
+	  || GET_MODE (op0) != GET_MODE (target))
+	return false;
 
-#undef TARGET_SETUP_INCOMING_VARARGS
-#define TARGET_SETUP_INCOMING_VARARGS loongarch_setup_incoming_varargs
-#undef TARGET_STRICT_ARGUMENT_NAMING
-#define TARGET_STRICT_ARGUMENT_NAMING hook_bool_CUMULATIVE_ARGS_true
-#undef TARGET_MUST_PASS_IN_STACK
-#define TARGET_MUST_PASS_IN_STACK must_pass_in_stack_var_size
-#undef TARGET_PASS_BY_REFERENCE
-#define TARGET_PASS_BY_REFERENCE loongarch_pass_by_reference
-#undef TARGET_ARG_PARTIAL_BYTES
-#define TARGET_ARG_PARTIAL_BYTES loongarch_arg_partial_bytes
-#undef TARGET_FUNCTION_ARG
-#define TARGET_FUNCTION_ARG loongarch_function_arg
-#undef TARGET_FUNCTION_ARG_ADVANCE
-#define TARGET_FUNCTION_ARG_ADVANCE loongarch_function_arg_advance
-#undef TARGET_FUNCTION_ARG_BOUNDARY
-#define TARGET_FUNCTION_ARG_BOUNDARY loongarch_function_arg_boundary
+      if (d->testing_p)
+	return true;
 
-#undef TARGET_SCALAR_MODE_SUPPORTED_P
-#define TARGET_SCALAR_MODE_SUPPORTED_P loongarch_scalar_mode_supported_p
+      for (i = 0; i < d->nelt; i += 1)
+	{
+	  rperm[i] = GEN_INT (d->perm[i]);
+	}
 
-#undef TARGET_INIT_BUILTINS
-#define TARGET_INIT_BUILTINS loongarch_init_builtins
+      if (d->vmode == E_V2DFmode)
+	{
+	  sel = gen_rtx_CONST_VECTOR (E_V2DImode, gen_rtvec_v (d->nelt, rperm));
+	  tmp = gen_rtx_SUBREG (E_V2DImode, d->target, 0);
+	  emit_move_insn (tmp, sel);
+	}
+      else if (d->vmode == E_V4SFmode)
+	{
+	  sel = gen_rtx_CONST_VECTOR (E_V4SImode, gen_rtvec_v (d->nelt, rperm));
+	  tmp = gen_rtx_SUBREG (E_V4SImode, d->target, 0);
+	  emit_move_insn (tmp, sel);
+	}
+      else
+	{
+	  sel = gen_rtx_CONST_VECTOR (d->vmode, gen_rtvec_v (d->nelt, rperm));
+	  emit_move_insn (d->target, sel);
+	}
+
+      switch (d->vmode)
+	{
+	case E_V2DFmode:
+	  emit_insn (gen_lsx_vshuf_d_f (target, target, op1, op0));
+	  break;
+	case E_V2DImode:
+	  emit_insn (gen_lsx_vshuf_d (target, target, op1, op0));
+	  break;
+	case E_V4SFmode:
+	  emit_insn (gen_lsx_vshuf_w_f (target, target, op1, op0));
+	  break;
+	case E_V4SImode:
+	  emit_insn (gen_lsx_vshuf_w (target, target, op1, op0));
+	  break;
+	case E_V8HImode:
+	  emit_insn (gen_lsx_vshuf_h (target, target, op1, op0));
+	  break;
+	case E_V16QImode:
+	  emit_insn (gen_lsx_vshuf_b (target, op1, op0, target));
+	  break;
+	default:
+	  break;
+	}
+
+      return true;
+    }
+  return false;
+}
+
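+/* Try the generic vec_select/vec_concat based expansions for the constant
+   permutation described by D, falling back to the LSX shuffle patterns.
+   Return true on success.  */
+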
+static bool
+loongarch_expand_vec_perm_const_1 (struct expand_vec_perm_d *d)
+{
+  unsigned int i, nelt = d->nelt;
+  unsigned char perm2[MAX_VECT_LEN];
+
+  if (d->one_vector_p)
+    {
+      /* Try interleave with alternating operands.  */
+      memcpy (perm2, d->perm, sizeof (perm2));
+      for (i = 1; i < nelt; i += 2)
+	perm2[i] += nelt;
+      if (loongarch_expand_vselect_vconcat (d->target, d->op0, d->op1, perm2,
+					    nelt))
+	return true;
+    }
+  else
+    {
+      if (loongarch_expand_vselect_vconcat (d->target, d->op0, d->op1,
+					    d->perm, nelt))
+	return true;
+
+      /* Try again with swapped operands.  */
+      for (i = 0; i < nelt; ++i)
+	perm2[i] = (d->perm[i] + nelt) & (2 * nelt - 1);
+      if (loongarch_expand_vselect_vconcat (d->target, d->op1, d->op0, perm2,
+					    nelt))
+	return true;
+    }
+
+  if (loongarch_expand_lsx_shuffle (d))
+    return true;
+  return false;
+}
+
+/* Implementation of constant vector permutation.  This function identifies
+   recognized patterns of the permutation selector argument, and uses one
+   or more instructions to finish the permutation job correctly.  For
+   unsupported patterns, it returns false.  */
+
+static bool
+loongarch_expand_vec_perm_const_2 (struct expand_vec_perm_d *d)
+{
+  /* Although we have the LSX vec_perm<mode> template, there are still some
+     128-bit vector permutation operations sent to vectorize_vec_perm_const.
+     In this case, we just simply wrap them with a single vshuf.*
+     instruction, because the LSX vshuf.* instructions have exactly the
+     behavior that GCC expects.  */
+  return loongarch_try_expand_lsx_vshuf_const (d);
+}
+
+/* Implement TARGET_VECTORIZE_VEC_PERM_CONST.  */
+
+static bool
+loongarch_vectorize_vec_perm_const (machine_mode vmode, machine_mode op_mode,
+				    rtx target, rtx op0, rtx op1,
+				    const vec_perm_indices &sel)
+{
+  if (vmode != op_mode)
+    return false;
+
+  struct expand_vec_perm_d d;
+  int i, nelt, which;
+  unsigned char orig_perm[MAX_VECT_LEN];
+  bool ok;
+
+  d.target = target;
+  if (op0)
+    {
+      rtx nop0 = force_reg (vmode, op0);
+      if (op0 == op1)
+	op1 = nop0;
+      op0 = nop0;
+    }
+  if (op1)
+    op1 = force_reg (vmode, op1);
+  d.op0 = op0;
+  d.op1 = op1;
+
+  d.vmode = vmode;
+  gcc_assert (VECTOR_MODE_P (vmode));
+  d.nelt = nelt = GET_MODE_NUNITS (vmode);
+  d.testing_p = !target;
+
+  /* This is overly conservative, but ensures we don't get an
+     uninitialized warning on ORIG_PERM.  */
+  memset (orig_perm, 0, MAX_VECT_LEN);
+  for (i = which = 0; i < nelt; ++i)
+    {
+      int ei = sel[i] & (2 * nelt - 1);
+      which |= (ei < nelt ? 1 : 2);
+      orig_perm[i] = ei;
+    }
+  memcpy (d.perm, orig_perm, MAX_VECT_LEN);
+
+  switch (which)
+    {
+    default:
+      gcc_unreachable ();
+
+    case 3:
+      d.one_vector_p = false;
+      if (d.testing_p || !rtx_equal_p (d.op0, d.op1))
+	break;
+      /* FALLTHRU */
+
+    case 2:
+      for (i = 0; i < nelt; ++i)
+	d.perm[i] &= nelt - 1;
+      d.op0 = d.op1;
+      d.one_vector_p = true;
+      break;
+
+    case 1:
+      d.op1 = d.op0;
+      d.one_vector_p = true;
+      break;
+    }
+
+  if (d.testing_p)
+    {
+      d.target = gen_raw_REG (d.vmode, LAST_VIRTUAL_REGISTER + 1);
+      d.op1 = d.op0 = gen_raw_REG (d.vmode, LAST_VIRTUAL_REGISTER + 2);
+      if (!d.one_vector_p)
+	d.op1 = gen_raw_REG (d.vmode, LAST_VIRTUAL_REGISTER + 3);
+
+      ok = loongarch_expand_vec_perm_const_2 (&d);
+      if (ok)
+	return ok;
+
+      start_sequence ();
+      ok = loongarch_expand_vec_perm_const_1 (&d);
+      end_sequence ();
+      return ok;
+    }
+
+  ok = loongarch_expand_vec_perm_const_2 (&d);
+  if (!ok)
+    ok = loongarch_expand_vec_perm_const_1 (&d);
+
+  /* If we were given a two-vector permutation which just happened to
+     have both input vectors equal, we folded this into a one-vector
+     permutation.  There are several loongson patterns that are matched
+     via direct vec_select+vec_concat expansion, but we do not have
+     support in loongarch_expand_vec_perm_const_1 to guess the adjustment
+     that should be made for a single operand.  Just try again with
+     the original permutation.  */
+  if (!ok && which == 3)
+    {
+      d.op0 = op0;
+      d.op1 = op1;
+      d.one_vector_p = false;
+      memcpy (d.perm, orig_perm, MAX_VECT_LEN);
+      ok = loongarch_expand_vec_perm_const_1 (&d);
+    }
+
+  return ok;
+}
+
+/* Implement TARGET_SCHED_REASSOCIATION_WIDTH.  */
+
+static int
+loongarch_sched_reassociation_width (unsigned int opc, machine_mode mode)
+{
+  switch (LARCH_ACTUAL_TUNE)
+    {
+    case CPU_LOONGARCH64:
+    case CPU_LA464:
+      /* Vector part.  */
+      if (LSX_SUPPORTED_MODE_P (mode))
+	{
+	  /* Integer vector instructions execute in the FP unit.
+	     The width of integer/floating-point vector instructions is 3.  */
+	  return 3;
+	}
+
+      /* Scalar part.  */
+      else if (INTEGRAL_MODE_P (mode))
+	return 1;
+      else if (FLOAT_MODE_P (mode))
+	{
+	  if (opc == PLUS_EXPR)
+	    {
+	      return 2;
+	    }
+	  return 4;
+	}
+      break;
+    default:
+      break;
+    }
+  return 1;
+}
+
+/* Extract scalar element ELT of vector VEC into TARGET.  */
+
+void
+loongarch_expand_vector_extract (rtx target, rtx vec, int elt)
+{
+  machine_mode mode = GET_MODE (vec);
+  machine_mode inner_mode = GET_MODE_INNER (mode);
+  rtx tmp;
+
+  switch (mode)
+    {
+    case E_V8HImode:
+    case E_V16QImode:
+      break;
+
+    default:
+      break;
+    }
+
+  tmp = gen_rtx_PARALLEL (VOIDmode, gen_rtvec (1, GEN_INT (elt)));
+  tmp = gen_rtx_VEC_SELECT (inner_mode, vec, tmp);
+
+  /* Let the rtl optimizers know about the zero extension performed.  */
+  if (inner_mode == QImode || inner_mode == HImode)
+    {
+      tmp = gen_rtx_ZERO_EXTEND (SImode, tmp);
+      target = gen_lowpart (SImode, target);
+    }
+  if (inner_mode == SImode || inner_mode == DImode)
+    {
+      tmp = gen_rtx_SIGN_EXTEND (inner_mode, tmp);
+    }
+
+  emit_insn (gen_rtx_SET (target, tmp));
+}
+
+/* Generate code to copy vector bits i / 2 ... i - 1 from vector SRC
+   to bits 0 ... i / 2 - 1 of vector DEST, which has the same mode.
+   The upper bits of DEST are undefined, though they shouldn't cause
+   exceptions (some bits from src or all zeros are ok).  */
+
+static void
+emit_reduc_half (rtx dest, rtx src, int i)
+{
+  rtx tem, d = dest;
+  switch (GET_MODE (src))
+    {
+    case E_V4SFmode:
+      tem = gen_lsx_vbsrl_w_f (dest, src, GEN_INT (i == 128 ? 8 : 4));
+      break;
+    case E_V2DFmode:
+      tem = gen_lsx_vbsrl_d_f (dest, src, GEN_INT (8));
+      break;
+    case E_V16QImode:
+    case E_V8HImode:
+    case E_V4SImode:
+    case E_V2DImode:
+      d = gen_reg_rtx (V2DImode);
+      tem = gen_lsx_vbsrl_d (d, gen_lowpart (V2DImode, src), GEN_INT (i/16));
+      break;
+    default:
+      gcc_unreachable ();
+    }
+  emit_insn (tem);
+  if (d != dest)
+    emit_move_insn (dest, gen_lowpart (GET_MODE (dest), d));
+}
+
+/* Expand a vector reduction.  FN is the binary pattern to reduce;
+   DEST is the destination; IN is the input vector.  */
+
+void
+loongarch_expand_vector_reduc (rtx (*fn) (rtx, rtx, rtx), rtx dest, rtx in)
+{
+  rtx half, dst, vec = in;
+  machine_mode mode = GET_MODE (in);
+  int i;
+
+  for (i = GET_MODE_BITSIZE (mode);
+       i > GET_MODE_UNIT_BITSIZE (mode);
+       i >>= 1)
+    {
+      half = gen_reg_rtx (mode);
+      emit_reduc_half (half, vec, i);
+      if (i == GET_MODE_UNIT_BITSIZE (mode) * 2)
+	dst = dest;
+      else
+	dst = gen_reg_rtx (mode);
+      emit_insn (fn (dst, half, vec));
+      vec = dst;
+    }
+}
+
+/* Expand an integral vector unpack operation.  */
+
+void
+loongarch_expand_vec_unpack (rtx operands[2], bool unsigned_p, bool high_p)
+{
+  machine_mode imode = GET_MODE (operands[1]);
+  rtx (*unpack) (rtx, rtx, rtx);
+  rtx (*cmpFunc) (rtx, rtx, rtx);
+  rtx tmp, dest;
+
+  if (ISA_HAS_LSX)
+    {
+      switch (imode)
+	{
+	case E_V4SImode:
+	  if (high_p != 0)
+	    unpack = gen_lsx_vilvh_w;
+	  else
+	    unpack = gen_lsx_vilvl_w;
+
+	  cmpFunc = gen_lsx_vslt_w;
+	  break;
+
+	case E_V8HImode:
+	  if (high_p != 0)
+	    unpack = gen_lsx_vilvh_h;
+	  else
+	    unpack = gen_lsx_vilvl_h;
+
+	  cmpFunc = gen_lsx_vslt_h;
+	  break;
+
+	case E_V16QImode:
+	  if (high_p != 0)
+	    unpack = gen_lsx_vilvh_b;
+	  else
+	    unpack = gen_lsx_vilvl_b;
+
+	  cmpFunc = gen_lsx_vslt_b;
+	  break;
+
+	default:
+	  gcc_unreachable ();
+	  break;
+	}
+
+      if (!unsigned_p)
+	{
+	  /* Extract the sign extension of each element by comparing each
+	     element with immediate zero.  */
+	  tmp = gen_reg_rtx (imode);
+	  emit_insn (cmpFunc (tmp, operands[1], CONST0_RTX (imode)));
+	}
+      else
+	tmp = force_reg (imode, CONST0_RTX (imode));
+
+      dest = gen_reg_rtx (imode);
+
+      emit_insn (unpack (dest, operands[1], tmp));
+      emit_move_insn (operands[0], gen_lowpart (GET_MODE (operands[0]), dest));
+      return;
+    }
+  gcc_unreachable ();
+}
+
+/* Construct and return PARALLEL RTX with CONST_INTs for HIGH (high_p == TRUE)
+   or LOW (high_p == FALSE) half of a vector for mode MODE.  */
+
+rtx
+loongarch_lsx_vec_parallel_const_half (machine_mode mode, bool high_p)
+{
+  int nunits = GET_MODE_NUNITS (mode);
+  rtvec v = rtvec_alloc (nunits / 2);
+  int base;
+  int i;
+
+  base = high_p ? nunits / 2 : 0;
+
+  for (i = 0; i < nunits / 2; i++)
+    RTVEC_ELT (v, i) = GEN_INT (base + i);
+
+  return gen_rtx_PARALLEL (VOIDmode, v);
+}
+
+/* A subroutine of loongarch_expand_vector_init, match constant vector
+   elements.  */
+
+static inline bool
+loongarch_constant_elt_p (rtx x)
+{
+  return CONST_INT_P (x) || GET_CODE (x) == CONST_DOUBLE;
+}
+
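+/* Generate a PARALLEL of CONST_INTs describing a shuffle in mode MODE:
+   each group of four elements selects within itself according to the
+   2-bit indices packed in VAL.  */
+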
+rtx
+loongarch_gen_const_int_vector_shuffle (machine_mode mode, int val)
+{
+  int nunits = GET_MODE_NUNITS (mode);
+  int nsets = nunits / 4;
+  rtx elts[MAX_VECT_LEN];
+  int set = 0;
+  int i, j;
+
+  /* Generate a const_int vector replicating the same 4-element set
+     from an immediate.  */
+  for (j = 0; j < nsets; j++, set = 4 * j)
+    for (i = 0; i < 4; i++)
+      elts[set + i] = GEN_INT (set + ((val >> (2 * i)) & 0x3));
+
+  return gen_rtx_PARALLEL (VOIDmode, gen_rtvec_v (nunits, elts));
+}
+
+/* Expand a vector initialization.  */
+
+void
+loongarch_expand_vector_init (rtx target, rtx vals)
+{
+  machine_mode vmode = GET_MODE (target);
+  machine_mode imode = GET_MODE_INNER (vmode);
+  unsigned i, nelt = GET_MODE_NUNITS (vmode);
+  unsigned nvar = 0;
+  bool all_same = true;
+  rtx x;
+
+  for (i = 0; i < nelt; ++i)
+    {
+      x = XVECEXP (vals, 0, i);
+      if (!loongarch_constant_elt_p (x))
+	nvar++;
+      if (i > 0 && !rtx_equal_p (x, XVECEXP (vals, 0, 0)))
+	all_same = false;
+    }
+
+  if (ISA_HAS_LSX)
+    {
+      if (all_same)
+	{
+	  rtx same = XVECEXP (vals, 0, 0);
+	  rtx temp, temp2;
+
+	  if (CONST_INT_P (same) && nvar == 0
+	      && loongarch_signed_immediate_p (INTVAL (same), 10, 0))
+	    {
+	      switch (vmode)
+		{
+		case E_V16QImode:
+		case E_V8HImode:
+		case E_V4SImode:
+		case E_V2DImode:
+		  temp = gen_rtx_CONST_VECTOR (vmode, XVEC (vals, 0));
+		  emit_move_insn (target, temp);
+		  return;
+
+		default:
+		  gcc_unreachable ();
+		}
+	    }
+	  temp = gen_reg_rtx (imode);
+	  if (imode == GET_MODE (same))
+	    temp2 = same;
+	  else if (GET_MODE_SIZE (imode) >= UNITS_PER_WORD)
+	    {
+	      if (GET_CODE (same) == MEM)
+		{
+		  rtx reg_tmp = gen_reg_rtx (GET_MODE (same));
+		  loongarch_emit_move (reg_tmp, same);
+		  temp2 = simplify_gen_subreg (imode, reg_tmp,
+					       GET_MODE (reg_tmp), 0);
+		}
+	      else
+		temp2 = simplify_gen_subreg (imode, same, GET_MODE (same), 0);
+	    }
+	  else
+	    {
+	      if (GET_CODE (same) == MEM)
+		{
+		  rtx reg_tmp = gen_reg_rtx (GET_MODE (same));
+		  loongarch_emit_move (reg_tmp, same);
+		  temp2 = lowpart_subreg (imode, reg_tmp, GET_MODE (reg_tmp));
+		}
+	      else
+		temp2 = lowpart_subreg (imode, same, GET_MODE (same));
+	    }
+	  emit_move_insn (temp, temp2);
+
+	  switch (vmode)
+	    {
+	    case E_V16QImode:
+	    case E_V8HImode:
+	    case E_V4SImode:
+	    case E_V2DImode:
+	      loongarch_emit_move (target, gen_rtx_VEC_DUPLICATE (vmode, temp));
+	      break;
+
+	    case E_V4SFmode:
+	      emit_insn (gen_lsx_vreplvei_w_f_scalar (target, temp));
+	      break;
+
+	    case E_V2DFmode:
+	      emit_insn (gen_lsx_vreplvei_d_f_scalar (target, temp));
+	      break;
+
+	    default:
+	      gcc_unreachable ();
+	    }
+	}
+      else
+	{
+	  emit_move_insn (target, CONST0_RTX (vmode));
+
+	  for (i = 0; i < nelt; ++i)
+	    {
+	      rtx temp = gen_reg_rtx (imode);
+	      emit_move_insn (temp, XVECEXP (vals, 0, i));
+	      switch (vmode)
+		{
+		case E_V16QImode:
+		  if (i == 0)
+		    emit_insn (gen_lsx_vreplvei_b_scalar (target, temp));
+		  else
+		    emit_insn (gen_vec_setv16qi (target, temp, GEN_INT (i)));
+		  break;
+
+		case E_V8HImode:
+		  if (i == 0)
+		    emit_insn (gen_lsx_vreplvei_h_scalar (target, temp));
+		  else
+		    emit_insn (gen_vec_setv8hi (target, temp, GEN_INT (i)));
+		  break;
+
+		case E_V4SImode:
+		  if (i == 0)
+		    emit_insn (gen_lsx_vreplvei_w_scalar (target, temp));
+		  else
+		    emit_insn (gen_vec_setv4si (target, temp, GEN_INT (i)));
+		  break;
+
+		case E_V2DImode:
+		  if (i == 0)
+		    emit_insn (gen_lsx_vreplvei_d_scalar (target, temp));
+		  else
+		    emit_insn (gen_vec_setv2di (target, temp, GEN_INT (i)));
+		  break;
+
+		case E_V4SFmode:
+		  if (i == 0)
+		    emit_insn (gen_lsx_vreplvei_w_f_scalar (target, temp));
+		  else
+		    emit_insn (gen_vec_setv4sf (target, temp, GEN_INT (i)));
+		  break;
+
+		case E_V2DFmode:
+		  if (i == 0)
+		    emit_insn (gen_lsx_vreplvei_d_f_scalar (target, temp));
+		  else
+		    emit_insn (gen_vec_setv2df (target, temp, GEN_INT (i)));
+		  break;
+
+		default:
+		  gcc_unreachable ();
+		}
+	    }
+	}
+      return;
+    }
+
+  /* Load constants from the pool, or whatever's handy.  */
+  if (nvar == 0)
+    {
+      emit_move_insn (target, gen_rtx_CONST_VECTOR (vmode, XVEC (vals, 0)));
+      return;
+    }
+
+  /* For two-part initialization, always use CONCAT.  */
+  if (nelt == 2)
+    {
+      rtx op0 = force_reg (imode, XVECEXP (vals, 0, 0));
+      rtx op1 = force_reg (imode, XVECEXP (vals, 0, 1));
+      x = gen_rtx_VEC_CONCAT (vmode, op0, op1);
+      emit_insn (gen_rtx_SET (target, x));
+      return;
+    }
+
+  /* Vectors with more than two elements require LSX and are handled
+     above.  */
+  gcc_assert (0);
+}
+
+/* Implement HARD_REGNO_CALLER_SAVE_MODE.  */
+
+machine_mode
+loongarch_hard_regno_caller_save_mode (unsigned int regno, unsigned int nregs,
+				       machine_mode mode)
+{
+  /* For performance, avoid saving/restoring upper parts of a register
+     by returning MODE as save mode when the mode is known.  */
+  if (mode == VOIDmode)
+    return choose_hard_reg_mode (regno, nregs, NULL);
+  else
+    return mode;
+}
+
+/* Generate RTL for comparing OP0 and OP1 using condition COND, storing
+   the result (-1 or 0 in each element) in DEST.  */
+
+static void
+loongarch_expand_lsx_cmp (rtx dest, enum rtx_code cond, rtx op0, rtx op1)
+{
+  machine_mode cmp_mode = GET_MODE (op0);
+  int unspec = -1;
+  bool negate = false;
+
+  switch (cmp_mode)
+    {
+    case E_V16QImode:
+    case E_V32QImode:
+    case E_V8HImode:
+    case E_V16HImode:
+    case E_V4SImode:
+    case E_V8SImode:
+    case E_V2DImode:
+    case E_V4DImode:
+      switch (cond)
+	{
+	case NE:
+	  cond = reverse_condition (cond);
+	  negate = true;
+	  break;
+	case EQ:
+	case LT:
+	case LE:
+	case LTU:
+	case LEU:
+	  break;
+	case GE:
+	case GT:
+	case GEU:
+	case GTU:
+	  std::swap (op0, op1);
+	  cond = swap_condition (cond);
+	  break;
+	default:
+	  gcc_unreachable ();
+	}
+      loongarch_emit_binary (cond, dest, op0, op1);
+      if (negate)
+	emit_move_insn (dest, gen_rtx_NOT (GET_MODE (dest), dest));
+      break;
+
+    case E_V4SFmode:
+    case E_V2DFmode:
+      switch (cond)
+	{
+	case UNORDERED:
+	case ORDERED:
+	case EQ:
+	case NE:
+	case UNEQ:
+	case UNLE:
+	case UNLT:
+	  break;
+	case LTGT: cond = NE; break;
+	case UNGE: cond = UNLE; std::swap (op0, op1); break;
+	case UNGT: cond = UNLT; std::swap (op0, op1); break;
+	case LE: unspec = UNSPEC_LSX_VFCMP_SLE; break;
+	case LT: unspec = UNSPEC_LSX_VFCMP_SLT; break;
+	case GE: unspec = UNSPEC_LSX_VFCMP_SLE; std::swap (op0, op1); break;
+	case GT: unspec = UNSPEC_LSX_VFCMP_SLT; std::swap (op0, op1); break;
+	default:
+		 gcc_unreachable ();
+	}
+      if (unspec < 0)
+	loongarch_emit_binary (cond, dest, op0, op1);
+      else
+	{
+	  rtx x = gen_rtx_UNSPEC (GET_MODE (dest),
+				  gen_rtvec (2, op0, op1), unspec);
+	  emit_insn (gen_rtx_SET (dest, x));
+	}
+      break;
+
+    default:
+      gcc_unreachable ();
+      break;
+    }
+}
+
+/* Expand VEC_COND_EXPR, where:
+   MODE is mode of the result
+   VIMODE equivalent integer mode
+   OPERANDS operands of VEC_COND_EXPR.  */
+
+void
+loongarch_expand_vec_cond_expr (machine_mode mode, machine_mode vimode,
+				rtx *operands)
+{
+  rtx cond = operands[3];
+  rtx cmp_op0 = operands[4];
+  rtx cmp_op1 = operands[5];
+  rtx cmp_res = gen_reg_rtx (vimode);
+
+  loongarch_expand_lsx_cmp (cmp_res, GET_CODE (cond), cmp_op0, cmp_op1);
+
+  /* We handle the following cases:
+     1) r = a CMP b ? -1 : 0
+     2) r = a CMP b ? -1 : v
+     3) r = a CMP b ?  v : 0
+     4) r = a CMP b ? v1 : v2  */
+
+  /* Case (1) above.  We only move the results.  */
+  if (operands[1] == CONSTM1_RTX (vimode)
+      && operands[2] == CONST0_RTX (vimode))
+    emit_move_insn (operands[0], cmp_res);
+  else
+    {
+      rtx src1 = gen_reg_rtx (vimode);
+      rtx src2 = gen_reg_rtx (vimode);
+      rtx mask = gen_reg_rtx (vimode);
+      rtx bsel;
+
+      /* Move the vector result to use it as a mask.  */
+      emit_move_insn (mask, cmp_res);
+
+      if (register_operand (operands[1], mode))
+	{
+	  rtx xop1 = operands[1];
+	  if (mode != vimode)
+	    {
+	      xop1 = gen_reg_rtx (vimode);
+	      emit_move_insn (xop1, gen_rtx_SUBREG (vimode, operands[1], 0));
+	    }
+	  emit_move_insn (src1, xop1);
+	}
+      else
+	{
+	  gcc_assert (operands[1] == CONSTM1_RTX (vimode));
+	  /* Case (2) if the below doesn't move the mask to src2.  */
+	  emit_move_insn (src1, mask);
+	}
+
+      if (register_operand (operands[2], mode))
+	{
+	  rtx xop2 = operands[2];
+	  if (mode != vimode)
+	    {
+	      xop2 = gen_reg_rtx (vimode);
+	      emit_move_insn (xop2, gen_rtx_SUBREG (vimode, operands[2], 0));
+	    }
+	  emit_move_insn (src2, xop2);
+	}
+      else
+	{
+	  gcc_assert (operands[2] == CONST0_RTX (mode));
+	  /* Case (3) if the above didn't move the mask to src1.  */
+	  emit_move_insn (src2, mask);
+	}
+
+      /* We deal with case (4) if the mask wasn't moved to either src1 or src2.
+	 In any case, we eventually do vector mask-based copy.  */
+      bsel = gen_rtx_IOR (vimode,
+			  gen_rtx_AND (vimode,
+				       gen_rtx_NOT (vimode, mask), src2),
+			  gen_rtx_AND (vimode, mask, src1));
+      /* The result is placed back to a register with the mask.  */
+      emit_insn (gen_rtx_SET (mask, bsel));
+      emit_move_insn (operands[0], gen_rtx_SUBREG (mode, mask, 0));
+    }
+}
+
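+/* Like loongarch_expand_vec_cond_expr, but the comparison result mask is
+   already available in OPERANDS[3]; MODE is the mode of the result and
+   VIMODE the equivalent integer mode.  */
+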
+void
+loongarch_expand_vec_cond_mask_expr (machine_mode mode, machine_mode vimode,
+				    rtx *operands)
+{
+  rtx cmp_res = operands[3];
+
+  /* We handle the following cases:
+     1) r = a CMP b ? -1 : 0
+     2) r = a CMP b ? -1 : v
+     3) r = a CMP b ?  v : 0
+     4) r = a CMP b ? v1 : v2  */
+
+  /* Case (1) above.  We only move the results.  */
+  if (operands[1] == CONSTM1_RTX (vimode)
+      && operands[2] == CONST0_RTX (vimode))
+    emit_move_insn (operands[0], cmp_res);
+  else
+    {
+      rtx src1 = gen_reg_rtx (vimode);
+      rtx src2 = gen_reg_rtx (vimode);
+      rtx mask = gen_reg_rtx (vimode);
+      rtx bsel;
+
+      /* Move the vector result to use it as a mask.  */
+      emit_move_insn (mask, cmp_res);
+
+      if (register_operand (operands[1], mode))
+	{
+	  rtx xop1 = operands[1];
+	  if (mode != vimode)
+	    {
+	      xop1 = gen_reg_rtx (vimode);
+	      emit_move_insn (xop1, gen_rtx_SUBREG (vimode, operands[1], 0));
+	    }
+	  emit_move_insn (src1, xop1);
+	}
+      else
+	{
+	  gcc_assert (operands[1] == CONSTM1_RTX (vimode));
+	  /* Case (2) if the below doesn't move the mask to src2.  */
+	  emit_move_insn (src1, mask);
+	}
+
+      if (register_operand (operands[2], mode))
+	{
+	  rtx xop2 = operands[2];
+	  if (mode != vimode)
+	    {
+	      xop2 = gen_reg_rtx (vimode);
+	      emit_move_insn (xop2, gen_rtx_SUBREG (vimode, operands[2], 0));
+	    }
+	  emit_move_insn (src2, xop2);
+	}
+      else
+	{
+	  gcc_assert (operands[2] == CONST0_RTX (mode));
+	  /* Case (3) if the above didn't move the mask to src1.  */
+	  emit_move_insn (src2, mask);
+	}
+
+      /* We deal with case (4) if the mask wasn't moved to either src1 or src2.
+	 In any case, we eventually do vector mask-based copy.  */
+      bsel = gen_rtx_IOR (vimode,
+			  gen_rtx_AND (vimode,
+				       gen_rtx_NOT (vimode, mask), src2),
+			  gen_rtx_AND (vimode, mask, src1));
+      /* The result is placed back to a register with the mask.  */
+      emit_insn (gen_rtx_SET (mask, bsel));
+      emit_move_insn (operands[0], gen_rtx_SUBREG (mode, mask, 0));
+    }
+}
+
+/* Expand an integer vector comparison.  */
+
+bool
+loongarch_expand_vec_cmp (rtx operands[])
+{
+  rtx_code code = GET_CODE (operands[1]);
+  loongarch_expand_lsx_cmp (operands[0], code, operands[2], operands[3]);
+  return true;
+}
+
+/* Implement TARGET_CASE_VALUES_THRESHOLD.  */
+
+unsigned int
+loongarch_case_values_threshold (void)
+{
+  return default_case_values_threshold ();
+}
+
+/* Implement TARGET_SPILL_CLASS.  */
+
+static reg_class_t
+loongarch_spill_class (reg_class_t rclass ATTRIBUTE_UNUSED,
+		       machine_mode mode ATTRIBUTE_UNUSED)
+{
+  return NO_REGS;
+}
+
+/* Implement TARGET_PROMOTE_FUNCTION_MODE.  */
+
+/* This function is equivalent to default_promote_function_mode_always_promote
+   except that it returns a promoted mode even if type is NULL_TREE.  This is
+   needed by libcalls which have no type (only a mode) such as fixed conversion
+   routines that take a signed or unsigned char/short argument and convert it
+   to a fixed type.  */
+
+static machine_mode
+loongarch_promote_function_mode (const_tree type ATTRIBUTE_UNUSED,
+				 machine_mode mode,
+				 int *punsignedp ATTRIBUTE_UNUSED,
+				 const_tree fntype ATTRIBUTE_UNUSED,
+				 int for_return ATTRIBUTE_UNUSED)
+{
+  int unsignedp;
+
+  if (type != NULL_TREE)
+    return promote_mode (type, mode, punsignedp);
+
+  unsignedp = *punsignedp;
+  PROMOTE_MODE (mode, unsignedp, type);
+  *punsignedp = unsignedp;
+  return mode;
+}
+
+/* Implement TARGET_STARTING_FRAME_OFFSET.  See loongarch_compute_frame_info
+   for details about the frame layout.  */
+
+static HOST_WIDE_INT
+loongarch_starting_frame_offset (void)
+{
+  if (FRAME_GROWS_DOWNWARD)
+    return 0;
+  return crtl->outgoing_args_size;
+}
+
+/* A subroutine of loongarch_build_signbit_mask.  Build a vector constant
+   of mode MODE whose first element is VALUE; if VECT is true, replicate
+   VALUE across all elements, otherwise set the remaining elements to
+   zero.  */
+
+rtx
+loongarch_build_const_vector (machine_mode mode, bool vect, rtx value)
+{
+  int i, n_elt;
+  rtvec v;
+  machine_mode scalar_mode;
+
+  switch (mode)
+    {
+    case E_V32QImode:
+    case E_V16QImode:
+    case E_V32HImode:
+    case E_V16HImode:
+    case E_V8HImode:
+    case E_V8SImode:
+    case E_V4SImode:
+    case E_V8DImode:
+    case E_V4DImode:
+    case E_V2DImode:
+      gcc_assert (vect);
+      /* FALLTHRU */
+    case E_V8SFmode:
+    case E_V4SFmode:
+    case E_V8DFmode:
+    case E_V4DFmode:
+    case E_V2DFmode:
+      n_elt = GET_MODE_NUNITS (mode);
+      v = rtvec_alloc (n_elt);
+      scalar_mode = GET_MODE_INNER (mode);
+
+      RTVEC_ELT (v, 0) = value;
+
+      for (i = 1; i < n_elt; ++i)
+	RTVEC_ELT (v, i) = vect ? value : CONST0_RTX (scalar_mode);
+
+      return gen_rtx_CONST_VECTOR (mode, v);
+
+    default:
+      gcc_unreachable ();
+    }
+}
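+
+/* A worked example of the routine above: loongarch_build_const_vector
+   (V4SImode, false, x) should yield the constant vector {x, 0, 0, 0},
+   while passing VECT as true should yield {x, x, x, x}.  */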
+
+/* Create a mask for the sign bit in MODE for a register.  If VECT is
+   true, then replicate the mask for all elements of the vector
+   register.  If INVERT is true, then create a mask excluding the sign
+   bit.  */
+
+rtx
+loongarch_build_signbit_mask (machine_mode mode, bool vect, bool invert)
+{
+  machine_mode vec_mode, imode;
+  wide_int w;
+  rtx mask, v;
+
+  switch (mode)
+    {
+    case E_V16SImode:
+    case E_V16SFmode:
+    case E_V8SImode:
+    case E_V4SImode:
+    case E_V8SFmode:
+    case E_V4SFmode:
+      vec_mode = mode;
+      imode = SImode;
+      break;
+
+    case E_V8DImode:
+    case E_V4DImode:
+    case E_V2DImode:
+    case E_V8DFmode:
+    case E_V4DFmode:
+    case E_V2DFmode:
+      vec_mode = mode;
+      imode = DImode;
+      break;
+
+    case E_TImode:
+    case E_TFmode:
+      vec_mode = VOIDmode;
+      imode = TImode;
+      break;
+
+    default:
+      gcc_unreachable ();
+    }
+
+  machine_mode inner_mode = GET_MODE_INNER (mode);
+  w = wi::set_bit_in_zero (GET_MODE_BITSIZE (inner_mode) - 1,
+			   GET_MODE_BITSIZE (inner_mode));
+  if (invert)
+    w = wi::bit_not (w);
+
+  /* Force this value into the low part of a fp vector constant.  */
+  mask = immed_wide_int_const (w, imode);
+  mask = gen_lowpart (inner_mode, mask);
+
+  if (vec_mode == VOIDmode)
+    return force_reg (inner_mode, mask);
+
+  v = loongarch_build_const_vector (vec_mode, vect, mask);
+  return force_reg (vec_mode, v);
+}
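+
+/* A worked example of the routine above: for V4SFmode with VECT set and
+   INVERT clear, each element is the SFmode value whose bit pattern is
+   0x80000000 (i.e. -0.0f); with INVERT set, the per-element pattern
+   should be 0x7fffffff, i.e. all bits except the sign bit.  */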
+
+static bool
+loongarch_builtin_support_vector_misalignment (machine_mode mode,
+					       const_tree type,
+					       int misalignment,
+					       bool is_packed)
+{
+  if (ISA_HAS_LSX && STRICT_ALIGNMENT)
+    {
+      if (optab_handler (movmisalign_optab, mode) == CODE_FOR_nothing)
+	return false;
+      if (misalignment == -1)
+	return false;
+    }
+  return default_builtin_support_vector_misalignment (mode, type, misalignment,
+						      is_packed);
+}
+
+/* Initialize the GCC target structure.  */
+#undef TARGET_ASM_ALIGNED_HI_OP
+#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
+#undef TARGET_ASM_ALIGNED_SI_OP
+#define TARGET_ASM_ALIGNED_SI_OP "\t.word\t"
+#undef TARGET_ASM_ALIGNED_DI_OP
+#define TARGET_ASM_ALIGNED_DI_OP "\t.dword\t"
+
+#undef TARGET_OPTION_OVERRIDE
+#define TARGET_OPTION_OVERRIDE loongarch_option_override
+
+#undef TARGET_LEGITIMIZE_ADDRESS
+#define TARGET_LEGITIMIZE_ADDRESS loongarch_legitimize_address
+
+#undef TARGET_ASM_SELECT_RTX_SECTION
+#define TARGET_ASM_SELECT_RTX_SECTION loongarch_select_rtx_section
+#undef TARGET_ASM_FUNCTION_RODATA_SECTION
+#define TARGET_ASM_FUNCTION_RODATA_SECTION loongarch_function_rodata_section
+
+#undef TARGET_SCHED_INIT
+#define TARGET_SCHED_INIT loongarch_sched_init
+#undef TARGET_SCHED_REORDER
+#define TARGET_SCHED_REORDER loongarch_sched_reorder
+#undef TARGET_SCHED_REORDER2
+#define TARGET_SCHED_REORDER2 loongarch_sched_reorder2
+#undef TARGET_SCHED_VARIABLE_ISSUE
+#define TARGET_SCHED_VARIABLE_ISSUE loongarch_variable_issue
+#undef TARGET_SCHED_ADJUST_COST
+#define TARGET_SCHED_ADJUST_COST loongarch_adjust_cost
+#undef TARGET_SCHED_ISSUE_RATE
+#define TARGET_SCHED_ISSUE_RATE loongarch_issue_rate
+#undef TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD
+#define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD \
+  loongarch_multipass_dfa_lookahead
+
+#undef TARGET_FUNCTION_OK_FOR_SIBCALL
+#define TARGET_FUNCTION_OK_FOR_SIBCALL loongarch_function_ok_for_sibcall
+
+#undef TARGET_VALID_POINTER_MODE
+#define TARGET_VALID_POINTER_MODE loongarch_valid_pointer_mode
+#undef TARGET_REGISTER_MOVE_COST
+#define TARGET_REGISTER_MOVE_COST loongarch_register_move_cost
+#undef TARGET_MEMORY_MOVE_COST
+#define TARGET_MEMORY_MOVE_COST loongarch_memory_move_cost
+#undef TARGET_RTX_COSTS
+#define TARGET_RTX_COSTS loongarch_rtx_costs
+#undef TARGET_ADDRESS_COST
+#define TARGET_ADDRESS_COST loongarch_address_cost
+#undef TARGET_VECTORIZE_BUILTIN_VECTORIZATION_COST
+#define TARGET_VECTORIZE_BUILTIN_VECTORIZATION_COST \
+  loongarch_builtin_vectorization_cost
+
+
+#undef TARGET_IN_SMALL_DATA_P
+#define TARGET_IN_SMALL_DATA_P loongarch_in_small_data_p
+
+#undef TARGET_PREFERRED_RELOAD_CLASS
+#define TARGET_PREFERRED_RELOAD_CLASS loongarch_preferred_reload_class
+
+#undef TARGET_ASM_FILE_START_FILE_DIRECTIVE
+#define TARGET_ASM_FILE_START_FILE_DIRECTIVE true
+
+#undef TARGET_EXPAND_BUILTIN_VA_START
+#define TARGET_EXPAND_BUILTIN_VA_START loongarch_va_start
+
+#undef TARGET_PROMOTE_FUNCTION_MODE
+#define TARGET_PROMOTE_FUNCTION_MODE loongarch_promote_function_mode
+#undef TARGET_RETURN_IN_MEMORY
+#define TARGET_RETURN_IN_MEMORY loongarch_return_in_memory
+
+#undef TARGET_FUNCTION_VALUE
+#define TARGET_FUNCTION_VALUE loongarch_function_value
+#undef TARGET_LIBCALL_VALUE
+#define TARGET_LIBCALL_VALUE loongarch_libcall_value
+
+#undef TARGET_ASM_OUTPUT_MI_THUNK
+#define TARGET_ASM_OUTPUT_MI_THUNK loongarch_output_mi_thunk
+#undef TARGET_ASM_CAN_OUTPUT_MI_THUNK
+#define TARGET_ASM_CAN_OUTPUT_MI_THUNK \
+  hook_bool_const_tree_hwi_hwi_const_tree_true
+
+#undef TARGET_PRINT_OPERAND
+#define TARGET_PRINT_OPERAND loongarch_print_operand
+#undef TARGET_PRINT_OPERAND_ADDRESS
+#define TARGET_PRINT_OPERAND_ADDRESS loongarch_print_operand_address
+#undef TARGET_PRINT_OPERAND_PUNCT_VALID_P
+#define TARGET_PRINT_OPERAND_PUNCT_VALID_P \
+  loongarch_print_operand_punct_valid_p
+
+#undef TARGET_SETUP_INCOMING_VARARGS
+#define TARGET_SETUP_INCOMING_VARARGS loongarch_setup_incoming_varargs
+#undef TARGET_STRICT_ARGUMENT_NAMING
+#define TARGET_STRICT_ARGUMENT_NAMING hook_bool_CUMULATIVE_ARGS_true
+#undef TARGET_MUST_PASS_IN_STACK
+#define TARGET_MUST_PASS_IN_STACK must_pass_in_stack_var_size
+#undef TARGET_PASS_BY_REFERENCE
+#define TARGET_PASS_BY_REFERENCE loongarch_pass_by_reference
+#undef TARGET_ARG_PARTIAL_BYTES
+#define TARGET_ARG_PARTIAL_BYTES loongarch_arg_partial_bytes
+#undef TARGET_FUNCTION_ARG
+#define TARGET_FUNCTION_ARG loongarch_function_arg
+#undef TARGET_FUNCTION_ARG_ADVANCE
+#define TARGET_FUNCTION_ARG_ADVANCE loongarch_function_arg_advance
+#undef TARGET_FUNCTION_ARG_BOUNDARY
+#define TARGET_FUNCTION_ARG_BOUNDARY loongarch_function_arg_boundary
+
+#undef TARGET_VECTOR_MODE_SUPPORTED_P
+#define TARGET_VECTOR_MODE_SUPPORTED_P loongarch_vector_mode_supported_p
+
+#undef TARGET_SCALAR_MODE_SUPPORTED_P
+#define TARGET_SCALAR_MODE_SUPPORTED_P loongarch_scalar_mode_supported_p
+
+#undef TARGET_VECTORIZE_PREFERRED_SIMD_MODE
+#define TARGET_VECTORIZE_PREFERRED_SIMD_MODE loongarch_preferred_simd_mode
+
+#undef TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_MODES
+#define TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_MODES \
+  loongarch_autovectorize_vector_modes
+
+#undef TARGET_INIT_BUILTINS
+#define TARGET_INIT_BUILTINS loongarch_init_builtins
 #undef TARGET_BUILTIN_DECL
 #define TARGET_BUILTIN_DECL loongarch_builtin_decl
 #undef TARGET_EXPAND_BUILTIN
@@ -6941,6 +8818,14 @@ loongarch_set_handled_components (sbitmap components)
 
 #undef TARGET_MAX_ANCHOR_OFFSET
 #define TARGET_MAX_ANCHOR_OFFSET (IMM_REACH/2-1)
+#undef TARGET_VECTORIZE_VEC_PERM_CONST
+#define TARGET_VECTORIZE_VEC_PERM_CONST loongarch_vectorize_vec_perm_const
+
+#undef TARGET_SCHED_REASSOCIATION_WIDTH
+#define TARGET_SCHED_REASSOCIATION_WIDTH loongarch_sched_reassociation_width
+
+#undef TARGET_CASE_VALUES_THRESHOLD
+#define TARGET_CASE_VALUES_THRESHOLD loongarch_case_values_threshold
 
 #undef TARGET_ATOMIC_ASSIGN_EXPAND_FENV
 #define TARGET_ATOMIC_ASSIGN_EXPAND_FENV loongarch_atomic_assign_expand_fenv
@@ -6959,6 +8844,10 @@ loongarch_set_handled_components (sbitmap components)
 #undef TARGET_MODES_TIEABLE_P
 #define TARGET_MODES_TIEABLE_P loongarch_modes_tieable_p
 
+#undef TARGET_HARD_REGNO_CALL_PART_CLOBBERED
+#define TARGET_HARD_REGNO_CALL_PART_CLOBBERED \
+  loongarch_hard_regno_call_part_clobbered
+
 #undef TARGET_CUSTOM_FUNCTION_DESCRIPTORS
 #define TARGET_CUSTOM_FUNCTION_DESCRIPTORS 2
 
@@ -7009,6 +8898,10 @@ loongarch_set_handled_components (sbitmap components)
 #define TARGET_SHRINK_WRAP_SET_HANDLED_COMPONENTS \
   loongarch_set_handled_components
 
+#undef TARGET_VECTORIZE_SUPPORT_VECTOR_MISALIGNMENT
+#define TARGET_VECTORIZE_SUPPORT_VECTOR_MISALIGNMENT \
+  loongarch_builtin_support_vector_misalignment
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-loongarch.h"
diff --git a/gcc/config/loongarch/loongarch.h b/gcc/config/loongarch/loongarch.h
index eca723293a1..e939dd826d1 100644
--- a/gcc/config/loongarch/loongarch.h
+++ b/gcc/config/loongarch/loongarch.h
@@ -23,6 +23,8 @@ along with GCC; see the file COPYING3.  If not see
 
 #include "config/loongarch/loongarch-opts.h"
 
+#define TARGET_SUPPORTS_WIDE_INT 1
+
 /* Macros to silence warnings about numbers being signed in traditional
    C and unsigned in ISO C when compiled on 32-bit hosts.  */
 
@@ -179,6 +181,11 @@ along with GCC; see the file COPYING3.  If not see
 #define MIN_UNITS_PER_WORD 4
 #endif
 
+/* Width of a LSX vector register in bytes.  */
+#define UNITS_PER_LSX_REG 16
+/* Width of a LSX vector register in bits.  */
+#define BITS_PER_LSX_REG (UNITS_PER_LSX_REG * BITS_PER_UNIT)
+
 /* For LARCH, width of a floating point register.  */
 #define UNITS_PER_FPREG (TARGET_DOUBLE_FLOAT ? 8 : 4)
 
@@ -241,8 +248,10 @@ along with GCC; see the file COPYING3.  If not see
 #define STRUCTURE_SIZE_BOUNDARY 8
 
 /* There is no point aligning anything to a rounder boundary than
-   LONG_DOUBLE_TYPE_SIZE.  */
-#define BIGGEST_ALIGNMENT (LONG_DOUBLE_TYPE_SIZE)
+   LONG_DOUBLE_TYPE_SIZE, unless LSX is enabled, in which case the
+   biggest alignment is BITS_PER_LSX_REG.  */
+#define BIGGEST_ALIGNMENT \
+  (ISA_HAS_LSX ? BITS_PER_LSX_REG : LONG_DOUBLE_TYPE_SIZE)
 
 /* All accesses must be aligned.  */
 #define STRICT_ALIGNMENT (TARGET_STRICT_ALIGN)
@@ -378,6 +387,9 @@ along with GCC; see the file COPYING3.  If not see
 #define FP_REG_FIRST 32
 #define FP_REG_LAST 63
 #define FP_REG_NUM (FP_REG_LAST - FP_REG_FIRST + 1)
+#define LSX_REG_FIRST FP_REG_FIRST
+#define LSX_REG_LAST  FP_REG_LAST
+#define LSX_REG_NUM   FP_REG_NUM
 
 /* The DWARF 2 CFA column which tracks the return address from a
    signal handler context.  This means that to maintain backwards
@@ -395,8 +407,11 @@ along with GCC; see the file COPYING3.  If not see
   ((unsigned int) ((int) (REGNO) - FP_REG_FIRST) < FP_REG_NUM)
 #define FCC_REG_P(REGNO) \
   ((unsigned int) ((int) (REGNO) - FCC_REG_FIRST) < FCC_REG_NUM)
+#define LSX_REG_P(REGNO) \
+  ((unsigned int) ((int) (REGNO) - LSX_REG_FIRST) < LSX_REG_NUM)
 
 #define FP_REG_RTX_P(X) (REG_P (X) && FP_REG_P (REGNO (X)))
+#define LSX_REG_RTX_P(X) (REG_P (X) && LSX_REG_P (REGNO (X)))
 
 /* Select a register mode required for caller save of hard regno REGNO.  */
 #define HARD_REGNO_CALLER_SAVE_MODE(REGNO, NREGS, MODE) \
@@ -577,6 +592,11 @@ enum reg_class
 #define IMM12_OPERAND(VALUE) \
   ((unsigned HOST_WIDE_INT) (VALUE) + IMM_REACH / 2 < IMM_REACH)
 
+/* True if VALUE is a signed 13-bit number.  */
+
+#define IMM13_OPERAND(VALUE) \
+  ((unsigned HOST_WIDE_INT) (VALUE) + 0x1000 < 0x2000)
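+
+/* Worked example: (unsigned HOST_WIDE_INT) -4096 + 0x1000 == 0 and
+   4095 + 0x1000 == 0x1fff both pass the test, while 4096 + 0x1000 == 0x2000
+   does not, so the accepted range should be [-4096, 4095].  */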
+
 /* True if VALUE is a signed 16-bit number.  */
 
 #define IMM16_OPERAND(VALUE) \
@@ -706,6 +726,13 @@ enum reg_class
 #define FP_ARG_FIRST (FP_REG_FIRST + 0)
 #define FP_ARG_LAST (FP_ARG_FIRST + MAX_ARGS_IN_REGISTERS - 1)
 
+/* True if MODE is vector and supported in a LSX vector register.  */
+#define LSX_SUPPORTED_MODE_P(MODE)			\
+  (ISA_HAS_LSX						\
+   && GET_MODE_SIZE (MODE) == UNITS_PER_LSX_REG		\
+   && (GET_MODE_CLASS (MODE) == MODE_VECTOR_INT		\
+       || GET_MODE_CLASS (MODE) == MODE_VECTOR_FLOAT))
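+
+/* For example, V4SImode and V2DFmode (16-byte integer/float vectors) are
+   expected to satisfy this test when ISA_HAS_LSX, whereas V2SImode (only
+   8 bytes) and TImode (a scalar integer mode) are not.  */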
+
 /* 1 if N is a possible register number for function argument passing.
    We have no FP argument registers when soft-float.  */
 
@@ -926,7 +953,39 @@ typedef struct {
   { "s7",	30 + GP_REG_FIRST },					\
   { "s8",	31 + GP_REG_FIRST },					\
   { "v0",	 4 + GP_REG_FIRST },					\
-  { "v1",	 5 + GP_REG_FIRST }					\
+  { "v1",	 5 + GP_REG_FIRST },					\
+  { "vr0",	 0 + FP_REG_FIRST },					\
+  { "vr1",	 1 + FP_REG_FIRST },					\
+  { "vr2",	 2 + FP_REG_FIRST },					\
+  { "vr3",	 3 + FP_REG_FIRST },					\
+  { "vr4",	 4 + FP_REG_FIRST },					\
+  { "vr5",	 5 + FP_REG_FIRST },					\
+  { "vr6",	 6 + FP_REG_FIRST },					\
+  { "vr7",	 7 + FP_REG_FIRST },					\
+  { "vr8",	 8 + FP_REG_FIRST },					\
+  { "vr9",	 9 + FP_REG_FIRST },					\
+  { "vr10",	10 + FP_REG_FIRST },					\
+  { "vr11",	11 + FP_REG_FIRST },					\
+  { "vr12",	12 + FP_REG_FIRST },					\
+  { "vr13",	13 + FP_REG_FIRST },					\
+  { "vr14",	14 + FP_REG_FIRST },					\
+  { "vr15",	15 + FP_REG_FIRST },					\
+  { "vr16",	16 + FP_REG_FIRST },					\
+  { "vr17",	17 + FP_REG_FIRST },					\
+  { "vr18",	18 + FP_REG_FIRST },					\
+  { "vr19",	19 + FP_REG_FIRST },					\
+  { "vr20",	20 + FP_REG_FIRST },					\
+  { "vr21",	21 + FP_REG_FIRST },					\
+  { "vr22",	22 + FP_REG_FIRST },					\
+  { "vr23",	23 + FP_REG_FIRST },					\
+  { "vr24",	24 + FP_REG_FIRST },					\
+  { "vr25",	25 + FP_REG_FIRST },					\
+  { "vr26",	26 + FP_REG_FIRST },					\
+  { "vr27",	27 + FP_REG_FIRST },					\
+  { "vr28",	28 + FP_REG_FIRST },					\
+  { "vr29",	29 + FP_REG_FIRST },					\
+  { "vr30",	30 + FP_REG_FIRST },					\
+  { "vr31",	31 + FP_REG_FIRST }					\
 }
 
 /* Globalizing directive for a label.  */
diff --git a/gcc/config/loongarch/loongarch.md b/gcc/config/loongarch/loongarch.md
index b37e070660f..7b8978e2533 100644
--- a/gcc/config/loongarch/loongarch.md
+++ b/gcc/config/loongarch/loongarch.md
@@ -158,11 +158,12 @@ (define_attr "move_type"
    const,signext,pick_ins,logical,arith,sll0,andi,shift_shift"
   (const_string "unknown"))
 
-(define_attr "alu_type" "unknown,add,sub,not,nor,and,or,xor"
+(define_attr "alu_type" "unknown,add,sub,not,nor,and,or,xor,simd_add"
   (const_string "unknown"))
 
 ;; Main data type used by the insn
-(define_attr "mode" "unknown,none,QI,HI,SI,DI,TI,SF,DF,TF,FCC"
+(define_attr "mode" "unknown,none,QI,HI,SI,DI,TI,SF,DF,TF,FCC,
+  V2DI,V4SI,V8HI,V16QI,V2DF,V4SF"
   (const_string "unknown"))
 
 ;; True if the main data type is twice the size of a word.
@@ -234,7 +235,12 @@ (define_attr "type"
    prefetch,prefetchx,condmove,mgtf,mftg,const,arith,logical,
    shift,slt,signext,clz,trap,imul,idiv,move,
    fmove,fadd,fmul,fmadd,fdiv,frdiv,fabs,flogb,fneg,fcmp,fcopysign,fcvt,
-   fscaleb,fsqrt,frsqrt,accext,accmod,multi,atomic,syncloop,nop,ghost"
+   fscaleb,fsqrt,frsqrt,accext,accmod,multi,atomic,syncloop,nop,ghost,
+   simd_div,simd_fclass,simd_flog2,simd_fadd,simd_fcvt,simd_fmul,simd_fmadd,
+   simd_fdiv,simd_bitins,simd_bitmov,simd_insert,simd_sld,simd_mul,simd_fcmp,
+   simd_fexp2,simd_int_arith,simd_bit,simd_shift,simd_splat,simd_fill,
+   simd_permute,simd_shf,simd_sat,simd_pcnt,simd_copy,simd_branch,simd_clsx,
+   simd_fminmax,simd_logic,simd_move,simd_load,simd_store"
   (cond [(eq_attr "jirl" "!unset") (const_string "call")
 	 (eq_attr "got" "load") (const_string "load")
 
@@ -414,11 +420,20 @@ (define_mode_attr ifmt [(SI "w") (DI "l")])
 
 ;; This attribute gives the upper-case mode name for one unit of a
 ;; floating-point mode or vector mode.
-(define_mode_attr UNITMODE [(SF "SF") (DF "DF")])
+(define_mode_attr UNITMODE [(SF "SF") (DF "DF") (V2SF "SF") (V4SF "SF")
+			    (V16QI "QI") (V8HI "HI") (V4SI "SI") (V2DI "DI")
+			    (V2DF "DF")])
+
+;; As above, but in lower case.
+(define_mode_attr unitmode [(SF "sf") (DF "df") (V2SF "sf") (V4SF "sf")
+			    (V16QI "qi") (V8QI "qi") (V8HI "hi") (V4HI "hi")
+			    (V4SI "si") (V2SI "si") (V2DI "di") (V2DF "df")])
 
 ;; This attribute gives the integer mode that has half the size of
 ;; the controlling mode.
-(define_mode_attr HALFMODE [(DF "SI") (DI "SI") (TF "DI")])
+(define_mode_attr HALFMODE [(DF "SI") (DI "SI") (V2SF "SI")
+			    (V2SI "SI") (V4HI "SI") (V8QI "SI")
+			    (TF "DI")])
 
 ;; This attribute gives the integer mode that has the same size of a
 ;; floating-point mode.
@@ -445,6 +460,18 @@ (define_code_iterator neg_bitwise [and ior])
 ;; from the same template.
 (define_code_iterator any_div [div udiv mod umod])
 
+;; This code iterator allows addition and subtraction to be generated
+;; from the same template.
+(define_code_iterator addsub [plus minus])
+
+;; This code iterator allows addition and multiplication to be generated
+;; from the same template.
+(define_code_iterator addmul [plus mult])
+
+;; This code iterator allows addition subtraction and multiplication to be
+;; generated from the same template
+(define_code_iterator addsubmul [plus minus mult])
+
 ;; This code iterator allows all native floating-point comparisons to be
 ;; generated from the same template.
 (define_code_iterator fcond [unordered uneq unlt unle eq lt le
@@ -684,7 +711,6 @@ (define_insn "sub<mode>3"
   [(set_attr "alu_type" "sub")
    (set_attr "mode" "<MODE>")])
 
-
 (define_insn "*subsi3_extended"
   [(set (match_operand:DI 0 "register_operand" "= r")
 	(sign_extend:DI
@@ -1228,7 +1254,7 @@ (define_insn "smina<mode>3"
   "fmina.<fmt>\t%0,%1,%2"
   [(set_attr "type" "fmove")
    (set_attr "mode" "<MODE>")])
-\f
+
 ;;
 ;;  ....................
 ;;
@@ -2541,7 +2567,6 @@ (define_insn "rotr<mode>3"
   [(set_attr "type" "shift,shift")
    (set_attr "mode" "<MODE>")])
 
-\f
 ;; The following templates were added to generate "bstrpick.d + alsl.d"
 ;; instruction pairs.
 ;; It is required that the values of const_immalsl_operand and
@@ -3606,6 +3631,9 @@ (define_insn "loongarch_crcc_w_<size>_w"
 (include "generic.md")
 (include "la464.md")
 
+; The LoongArch SX Instructions.
+(include "lsx.md")
+
 (define_c_enum "unspec" [
   UNSPEC_ADDRESS_FIRST
 ])
diff --git a/gcc/config/loongarch/lsx.md b/gcc/config/loongarch/lsx.md
new file mode 100644
index 00000000000..dc60f9517e7
--- /dev/null
+++ b/gcc/config/loongarch/lsx.md
@@ -0,0 +1,4479 @@
+;; Machine Description for LARCH Loongson SX ASE
+;;
+;; Copyright (C) 2018 Free Software Foundation, Inc.
+;;
+;; This file is part of GCC.
+;;
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+;;
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;; GNU General Public License for more details.
+;;
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; <http://www.gnu.org/licenses/>.
+;;
+
+(define_c_enum "unspec" [
+  UNSPEC_LSX_VABSD_U
+  UNSPEC_LSX_VAVG_S
+  UNSPEC_LSX_VAVG_U
+  UNSPEC_LSX_VAVGR_S
+  UNSPEC_LSX_VAVGR_U
+  UNSPEC_LSX_VBITCLR
+  UNSPEC_LSX_VBITCLRI
+  UNSPEC_LSX_VBITREV
+  UNSPEC_LSX_VBITREVI
+  UNSPEC_LSX_VBITSET
+  UNSPEC_LSX_VBITSETI
+  UNSPEC_LSX_BRANCH_V
+  UNSPEC_LSX_BRANCH
+  UNSPEC_LSX_VFCMP_CAF
+  UNSPEC_LSX_VFCLASS
+  UNSPEC_LSX_VFCMP_CUNE
+  UNSPEC_LSX_VFCVT
+  UNSPEC_LSX_VFCVTH
+  UNSPEC_LSX_VFCVTL
+  UNSPEC_LSX_VFLOGB
+  UNSPEC_LSX_VFRECIP
+  UNSPEC_LSX_VFRINT
+  UNSPEC_LSX_VFRSQRT
+  UNSPEC_LSX_VFCMP_SAF
+  UNSPEC_LSX_VFCMP_SEQ
+  UNSPEC_LSX_VFCMP_SLE
+  UNSPEC_LSX_VFCMP_SLT
+  UNSPEC_LSX_VFCMP_SNE
+  UNSPEC_LSX_VFCMP_SOR
+  UNSPEC_LSX_VFCMP_SUEQ
+  UNSPEC_LSX_VFCMP_SULE
+  UNSPEC_LSX_VFCMP_SULT
+  UNSPEC_LSX_VFCMP_SUN
+  UNSPEC_LSX_VFCMP_SUNE
+  UNSPEC_LSX_VFTINT_S
+  UNSPEC_LSX_VFTINT_U
+  UNSPEC_LSX_VSAT_S
+  UNSPEC_LSX_VSAT_U
+  UNSPEC_LSX_VREPLVEI
+  UNSPEC_LSX_VSRAR
+  UNSPEC_LSX_VSRARI
+  UNSPEC_LSX_VSRLR
+  UNSPEC_LSX_VSRLRI
+  UNSPEC_LSX_VSHUF
+  UNSPEC_LSX_VMUH_S
+  UNSPEC_LSX_VMUH_U
+  UNSPEC_LSX_VEXTW_S
+  UNSPEC_LSX_VEXTW_U
+  UNSPEC_LSX_VSLLWIL_S
+  UNSPEC_LSX_VSLLWIL_U
+  UNSPEC_LSX_VSRAN
+  UNSPEC_LSX_VSSRAN_S
+  UNSPEC_LSX_VSSRAN_U
+  UNSPEC_LSX_VSRAIN
+  UNSPEC_LSX_VSRAINS_S
+  UNSPEC_LSX_VSRAINS_U
+  UNSPEC_LSX_VSRARN
+  UNSPEC_LSX_VSRLN
+  UNSPEC_LSX_VSRLRN
+  UNSPEC_LSX_VSSRLRN_U
+  UNSPEC_LSX_VFRSTPI
+  UNSPEC_LSX_VFRSTP
+  UNSPEC_LSX_VSHUF4I
+  UNSPEC_LSX_VBSRL_V
+  UNSPEC_LSX_VBSLL_V
+  UNSPEC_LSX_VEXTRINS
+  UNSPEC_LSX_VMSKLTZ
+  UNSPEC_LSX_VSIGNCOV
+  UNSPEC_LSX_VFTINTRNE
+  UNSPEC_LSX_VFTINTRP
+  UNSPEC_LSX_VFTINTRM
+  UNSPEC_LSX_VFTINT_W_D
+  UNSPEC_LSX_VFFINT_S_L
+  UNSPEC_LSX_VFTINTRZ_W_D
+  UNSPEC_LSX_VFTINTRP_W_D
+  UNSPEC_LSX_VFTINTRM_W_D
+  UNSPEC_LSX_VFTINTRNE_W_D
+  UNSPEC_LSX_VFTINTL_L_S
+  UNSPEC_LSX_VFFINTH_D_W
+  UNSPEC_LSX_VFFINTL_D_W
+  UNSPEC_LSX_VFTINTRZL_L_S
+  UNSPEC_LSX_VFTINTRZH_L_S
+  UNSPEC_LSX_VFTINTRPL_L_S
+  UNSPEC_LSX_VFTINTRPH_L_S
+  UNSPEC_LSX_VFTINTRMH_L_S
+  UNSPEC_LSX_VFTINTRML_L_S
+  UNSPEC_LSX_VFTINTRNEL_L_S
+  UNSPEC_LSX_VFTINTRNEH_L_S
+  UNSPEC_LSX_VFTINTH_L_H
+  UNSPEC_LSX_VFRINTRNE_S
+  UNSPEC_LSX_VFRINTRNE_D
+  UNSPEC_LSX_VFRINTRZ_S
+  UNSPEC_LSX_VFRINTRZ_D
+  UNSPEC_LSX_VFRINTRP_S
+  UNSPEC_LSX_VFRINTRP_D
+  UNSPEC_LSX_VFRINTRM_S
+  UNSPEC_LSX_VFRINTRM_D
+  UNSPEC_LSX_VSSRARN_S
+  UNSPEC_LSX_VSSRARN_U
+  UNSPEC_LSX_VSSRLN_U
+  UNSPEC_LSX_VSSRLN
+  UNSPEC_LSX_VSSRLRN
+  UNSPEC_LSX_VLDI
+  UNSPEC_LSX_VSHUF_B
+  UNSPEC_LSX_VLDX
+  UNSPEC_LSX_VSTX
+  UNSPEC_LSX_VEXTL_QU_DU
+  UNSPEC_LSX_VSETEQZ_V
+  UNSPEC_LSX_VADDWEV
+  UNSPEC_LSX_VADDWEV2
+  UNSPEC_LSX_VADDWEV3
+  UNSPEC_LSX_VADDWOD
+  UNSPEC_LSX_VADDWOD2
+  UNSPEC_LSX_VADDWOD3
+  UNSPEC_LSX_VSUBWEV
+  UNSPEC_LSX_VSUBWEV2
+  UNSPEC_LSX_VSUBWOD
+  UNSPEC_LSX_VSUBWOD2
+  UNSPEC_LSX_VMULWEV
+  UNSPEC_LSX_VMULWEV2
+  UNSPEC_LSX_VMULWEV3
+  UNSPEC_LSX_VMULWOD
+  UNSPEC_LSX_VMULWOD2
+  UNSPEC_LSX_VMULWOD3
+  UNSPEC_LSX_VHADDW_Q_D
+  UNSPEC_LSX_VHADDW_QU_DU
+  UNSPEC_LSX_VHSUBW_Q_D
+  UNSPEC_LSX_VHSUBW_QU_DU
+  UNSPEC_LSX_VMADDWEV
+  UNSPEC_LSX_VMADDWEV2
+  UNSPEC_LSX_VMADDWEV3
+  UNSPEC_LSX_VMADDWOD
+  UNSPEC_LSX_VMADDWOD2
+  UNSPEC_LSX_VMADDWOD3
+  UNSPEC_LSX_VROTR
+  UNSPEC_LSX_VADD_Q
+  UNSPEC_LSX_VSUB_Q
+  UNSPEC_LSX_VEXTH_Q_D
+  UNSPEC_LSX_VEXTH_QU_DU
+  UNSPEC_LSX_VMSKGEZ
+  UNSPEC_LSX_VMSKNZ
+  UNSPEC_LSX_VEXTL_Q_D
+  UNSPEC_LSX_VSRLNI
+  UNSPEC_LSX_VSRLRNI
+  UNSPEC_LSX_VSSRLNI
+  UNSPEC_LSX_VSSRLNI2
+  UNSPEC_LSX_VSSRLRNI
+  UNSPEC_LSX_VSSRLRNI2
+  UNSPEC_LSX_VSRANI
+  UNSPEC_LSX_VSRARNI
+  UNSPEC_LSX_VSSRANI
+  UNSPEC_LSX_VSSRANI2
+  UNSPEC_LSX_VSSRARNI
+  UNSPEC_LSX_VSSRARNI2
+  UNSPEC_LSX_VPERMI
+])
+
+;; This attribute gives the suffix for the double-width integer element
+;; of each vector mode.
+(define_mode_attr dlsxfmt
+  [(V2DI "q")
+   (V4SI "d")
+   (V8HI "w")
+   (V16QI "h")])
+
+(define_mode_attr dlsxfmt_u
+  [(V2DI "qu")
+   (V4SI "du")
+   (V8HI "wu")
+   (V16QI "hu")])
+
+(define_mode_attr d2lsxfmt
+  [(V4SI "q")
+   (V8HI "d")
+   (V16QI "w")])
+
+(define_mode_attr d2lsxfmt_u
+  [(V4SI "qu")
+   (V8HI "du")
+   (V16QI "wu")])
+
+;; This attribute gives the twice-doubled mode (VDMODE applied twice)
+;; for vector modes.
+(define_mode_attr VD2MODE
+  [(V4SI "V2DI")
+   (V8HI "V2DI")
+   (V16QI "V4SI")])
+
+;; All vector modes with 128 bits.
+(define_mode_iterator LSX      [V2DF V4SF V2DI V4SI V8HI V16QI])
+
+;; Same as LSX.  Used by vcond to iterate two modes.
+(define_mode_iterator LSX_2    [V2DF V4SF V2DI V4SI V8HI V16QI])
+
+;; Only used for splitting insert_d and copy_{u,s}.d.
+(define_mode_iterator LSX_D    [V2DI V2DF])
+
+;; Only used for copy_{u,s}.w.
+(define_mode_iterator LSX_W    [V4SI V4SF])
+
+;; Only integer modes.
+(define_mode_iterator ILSX     [V2DI V4SI V8HI V16QI])
+
+;; As ILSX but excludes V16QI.
+(define_mode_iterator ILSX_DWH [V2DI V4SI V8HI])
+
+;; As LSX but excludes V16QI.
+(define_mode_iterator LSX_DWH  [V2DF V4SF V2DI V4SI V8HI])
+
+;; As ILSX but excludes V2DI.
+(define_mode_iterator ILSX_WHB [V4SI V8HI V16QI])
+
+;; Only integer modes equal or larger than a word.
+(define_mode_iterator ILSX_DW  [V2DI V4SI])
+
+;; Only integer modes smaller than a word.
+(define_mode_iterator ILSX_HB  [V8HI V16QI])
+
+;;;; Only integer modes for fixed-point madd_q/maddr_q.
+;;(define_mode_iterator ILSX_WH  [V4SI V8HI])
+
+;; Only floating-point modes.
+(define_mode_iterator FLSX     [V2DF V4SF])
+
+;; Only used for immediate set shuffle elements instruction.
+(define_mode_iterator LSX_WHB_W [V4SI V8HI V16QI V4SF])
+
+;; This attribute gives the integer vector mode with the same size.
+(define_mode_attr VIMODE
+  [(V2DF "V2DI")
+   (V4SF "V4SI")
+   (V2DI "V2DI")
+   (V4SI "V4SI")
+   (V8HI "V8HI")
+   (V16QI "V16QI")])
+
+;; The attribute gives half modes for vector modes.
+(define_mode_attr VHMODE
+  [(V8HI "V16QI")
+   (V4SI "V8HI")
+   (V2DI "V4SI")])
+
+;; The attribute gives double modes for vector modes.
+(define_mode_attr VDMODE
+  [(V2DI "V2DI")
+   (V4SI "V2DI")
+   (V8HI "V4SI")
+   (V16QI "V8HI")])
+
+;; This attribute gives the half-width-element mode with the same number of
+;; elements for vector modes.
+(define_mode_attr VTRUNCMODE
+  [(V8HI "V8QI")
+   (V4SI "V4HI")
+   (V2DI "V2SI")])
+
+;; This attribute gives the mode of the result for "vpickve2gr_b, copy_u_b" etc.
+(define_mode_attr VRES
+  [(V2DF "DF")
+   (V4SF "SF")
+   (V2DI "DI")
+   (V4SI "SI")
+   (V8HI "SI")
+   (V16QI "SI")])
+
+;; Only used with LSX_D iterator.
+(define_mode_attr lsx_d
+  [(V2DI "reg_or_0")
+   (V2DF "register")])
+
+;; This attribute gives the integer vector mode with the same size.
+(define_mode_attr mode_i
+  [(V2DF "v2di")
+   (V4SF "v4si")
+   (V2DI "v2di")
+   (V4SI "v4si")
+   (V8HI "v8hi")
+   (V16QI "v16qi")])
+
+;; This attribute gives suffix for LSX instructions.
+(define_mode_attr lsxfmt
+  [(V2DF "d")
+   (V4SF "w")
+   (V2DI "d")
+   (V4SI "w")
+   (V8HI "h")
+   (V16QI "b")])
+
+;; This attribute gives the unsigned suffix for LSX instructions.
+(define_mode_attr lsxfmt_u
+  [(V2DF "du")
+   (V4SF "wu")
+   (V2DI "du")
+   (V4SI "wu")
+   (V8HI "hu")
+   (V16QI "bu")])
+
+;; This attribute gives suffix for integers in VHMODE.
+(define_mode_attr hlsxfmt
+  [(V2DI "w")
+   (V4SI "h")
+   (V8HI "b")])
+
+;; This attribute gives the unsigned suffix for integers in VHMODE.
+(define_mode_attr hlsxfmt_u
+  [(V2DI "wu")
+   (V4SI "hu")
+   (V8HI "bu")])
+
+;; This attribute gives define_insn suffix for LSX instructions that need
+;; distinction between integer and floating point.
+(define_mode_attr lsxfmt_f
+  [(V2DF "d_f")
+   (V4SF "w_f")
+   (V2DI "d")
+   (V4SI "w")
+   (V8HI "h")
+   (V16QI "b")])
+
+(define_mode_attr flsxfmt_f
+  [(V2DF "d_f")
+   (V4SF "s_f")
+   (V2DI "d")
+   (V4SI "w")
+   (V8HI "h")
+   (V16QI "b")])
+
+(define_mode_attr flsxfmt
+  [(V2DF "d")
+   (V4SF "s")
+   (V2DI "d")
+   (V4SI "s")])
+
+(define_mode_attr flsxfrint
+  [(V2DF "d")
+   (V4SF "s")])
+
+(define_mode_attr ilsxfmt
+  [(V2DF "l")
+   (V4SF "w")])
+
+(define_mode_attr ilsxfmt_u
+  [(V2DF "lu")
+   (V4SF "wu")])
+
+;; This is used to form an immediate operand constraint using
+;; "const_<indeximm>_operand".
+(define_mode_attr indeximm
+  [(V2DF "0_or_1")
+   (V4SF "0_to_3")
+   (V2DI "0_or_1")
+   (V4SI "0_to_3")
+   (V8HI "uimm3")
+   (V16QI "uimm4")])
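+
+;; For example, V8HI should map to "const_uimm3_operand", i.e. element
+;; indices 0..7, one per halfword lane, while V2DI and V2DF use
+;; "const_0_or_1_operand" for their two lanes.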
+
+;; This attribute represents the bitmask needed for vec_merge using
+;; "const_<bitmask>_operand".
+(define_mode_attr bitmask
+  [(V2DF "exp_2")
+   (V4SF "exp_4")
+   (V2DI "exp_2")
+   (V4SI "exp_4")
+   (V8HI "exp_8")
+   (V16QI "exp_16")])
+
+;; This attribute is used to form an immediate operand constraint using
+;; "const_<bitimm>_operand".
+(define_mode_attr bitimm
+  [(V16QI "uimm3")
+   (V8HI  "uimm4")
+   (V4SI  "uimm5")
+   (V2DI  "uimm6")])
+
+
+(define_int_iterator FRINT_S [UNSPEC_LSX_VFRINTRP_S
+			    UNSPEC_LSX_VFRINTRZ_S
+			    UNSPEC_LSX_VFRINT
+			    UNSPEC_LSX_VFRINTRM_S])
+
+(define_int_iterator FRINT_D [UNSPEC_LSX_VFRINTRP_D
+			    UNSPEC_LSX_VFRINTRZ_D
+			    UNSPEC_LSX_VFRINT
+			    UNSPEC_LSX_VFRINTRM_D])
+
+(define_int_attr frint_pattern_s
+  [(UNSPEC_LSX_VFRINTRP_S  "ceil")
+   (UNSPEC_LSX_VFRINTRZ_S  "btrunc")
+   (UNSPEC_LSX_VFRINT	   "rint")
+   (UNSPEC_LSX_VFRINTRM_S  "floor")])
+
+(define_int_attr frint_pattern_d
+  [(UNSPEC_LSX_VFRINTRP_D  "ceil")
+   (UNSPEC_LSX_VFRINTRZ_D  "btrunc")
+   (UNSPEC_LSX_VFRINT	   "rint")
+   (UNSPEC_LSX_VFRINTRM_D  "floor")])
+
+(define_int_attr frint_suffix
+  [(UNSPEC_LSX_VFRINTRP_S  "rp")
+   (UNSPEC_LSX_VFRINTRP_D  "rp")
+   (UNSPEC_LSX_VFRINTRZ_S  "rz")
+   (UNSPEC_LSX_VFRINTRZ_D  "rz")
+   (UNSPEC_LSX_VFRINT	   "")
+   (UNSPEC_LSX_VFRINTRM_S  "rm")
+   (UNSPEC_LSX_VFRINTRM_D  "rm")])
+
+(define_expand "vec_init<mode><unitmode>"
+  [(match_operand:LSX 0 "register_operand")
+   (match_operand:LSX 1 "")]
+  "ISA_HAS_LSX"
+{
+  loongarch_expand_vector_init (operands[0], operands[1]);
+  DONE;
+})
+
+;; vpickev pattern with implicit type conversion.
+(define_insn "vec_pack_trunc_<mode>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(vec_concat:<VHMODE>
+	  (truncate:<VTRUNCMODE>
+	    (match_operand:ILSX_DWH 1 "register_operand" "f"))
+	  (truncate:<VTRUNCMODE>
+	    (match_operand:ILSX_DWH 2 "register_operand" "f"))))]
+  "ISA_HAS_LSX"
+  "vpickev.<hlsxfmt>\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "<MODE>")])
+
+(define_expand "vec_unpacks_hi_v4sf"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(float_extend:V2DF
+	  (vec_select:V2SF
+	    (match_operand:V4SF 1 "register_operand" "f")
+	    (match_dup 2))))]
+  "ISA_HAS_LSX"
+{
+  operands[2] = loongarch_lsx_vec_parallel_const_half (V4SFmode,
+      true/*high_p*/);
+})
+
+(define_expand "vec_unpacks_lo_v4sf"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(float_extend:V2DF
+	  (vec_select:V2SF
+	    (match_operand:V4SF 1 "register_operand" "f")
+	    (match_dup 2))))]
+  "ISA_HAS_LSX"
+{
+  operands[2] = loongarch_lsx_vec_parallel_const_half (V4SFmode,
+      false/*high_p*/);
+})
+
+(define_expand "vec_unpacks_hi_<mode>"
+  [(match_operand:<VDMODE> 0 "register_operand")
+   (match_operand:ILSX_WHB 1 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  loongarch_expand_vec_unpack (operands, false/*unsigned_p*/, true/*high_p*/);
+  DONE;
+})
+
+(define_expand "vec_unpacks_lo_<mode>"
+  [(match_operand:<VDMODE> 0 "register_operand")
+   (match_operand:ILSX_WHB 1 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  loongarch_expand_vec_unpack (operands, false/*unsigned_p*/, false/*high_p*/);
+  DONE;
+})
+
+(define_expand "vec_unpacku_hi_<mode>"
+  [(match_operand:<VDMODE> 0 "register_operand")
+   (match_operand:ILSX_WHB 1 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  loongarch_expand_vec_unpack (operands, true/*unsigned_p*/, true/*high_p*/);
+  DONE;
+})
+
+(define_expand "vec_unpacku_lo_<mode>"
+  [(match_operand:<VDMODE> 0 "register_operand")
+   (match_operand:ILSX_WHB 1 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  loongarch_expand_vec_unpack (operands, true/*unsigned_p*/, false/*high_p*/);
+  DONE;
+})
+
+(define_expand "vec_extract<mode><unitmode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:ILSX 1 "register_operand")
+   (match_operand 2 "const_<indeximm>_operand")]
+  "ISA_HAS_LSX"
+{
+  if (<UNITMODE>mode == QImode || <UNITMODE>mode == HImode)
+    {
+      rtx dest1 = gen_reg_rtx (SImode);
+      emit_insn (gen_lsx_vpickve2gr_<lsxfmt> (dest1, operands[1], operands[2]));
+      emit_move_insn (operands[0],
+		      gen_lowpart (<UNITMODE>mode, dest1));
+    }
+  else
+    emit_insn (gen_lsx_vpickve2gr_<lsxfmt> (operands[0], operands[1], operands[2]));
+  DONE;
+})
+
+(define_expand "vec_extract<mode><unitmode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:FLSX 1 "register_operand")
+   (match_operand 2 "const_<indeximm>_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx temp;
+  HOST_WIDE_INT val = INTVAL (operands[2]);
+
+  if (val == 0)
+    temp = operands[1];
+  else
+    {
+      rtx n = GEN_INT (val * GET_MODE_SIZE (<UNITMODE>mode));
+      temp = gen_reg_rtx (<MODE>mode);
+      emit_insn (gen_lsx_vbsrl_<lsxfmt_f> (temp, operands[1], n));
+    }
+  emit_insn (gen_lsx_vec_extract_<lsxfmt_f> (operands[0], temp));
+  DONE;
+})
+
+(define_insn_and_split "lsx_vec_extract_<lsxfmt_f>"
+  [(set (match_operand:<UNITMODE> 0 "register_operand" "=f")
+	(vec_select:<UNITMODE>
+	  (match_operand:FLSX 1 "register_operand" "f")
+	  (parallel [(const_int 0)])))]
+  "ISA_HAS_LSX"
+  "#"
+  "&& reload_completed"
+  [(set (match_dup 0) (match_dup 1))]
+{
+  operands[1] = gen_rtx_REG (<UNITMODE>mode, REGNO (operands[1]));
+}
+  [(set_attr "move_type" "fmove")
+   (set_attr "mode" "<UNITMODE>")])
+
+(define_expand "vec_set<mode>"
+  [(match_operand:ILSX 0 "register_operand")
+   (match_operand:<UNITMODE> 1 "reg_or_0_operand")
+   (match_operand 2 "const_<indeximm>_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx index = GEN_INT (1 << INTVAL (operands[2]));
+  emit_insn (gen_lsx_vinsgr2vr_<lsxfmt> (operands[0], operands[1],
+					 operands[0], index));
+  DONE;
+})
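+
+;; Worked example for the expander above: setting element 2 of a V4SI
+;; vector passes GEN_INT (1 << 2) = 4 as the vec_merge mask, which
+;; lsx_vinsgr2vr_w should interpret as "replace only lane 2 with the
+;; scalar and keep the other lanes".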
+
+(define_expand "vec_set<mode>"
+  [(match_operand:FLSX 0 "register_operand")
+   (match_operand:<UNITMODE> 1 "register_operand")
+   (match_operand 2 "const_<indeximm>_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx index = GEN_INT (1 << INTVAL (operands[2]));
+  emit_insn (gen_lsx_vextrins_<lsxfmt_f>_scalar (operands[0], operands[1],
+						 operands[0], index));
+  DONE;
+})
+
+(define_expand "vec_cmp<mode><mode_i>"
+  [(set (match_operand:<VIMODE> 0 "register_operand")
+	(match_operator 1 ""
+	  [(match_operand:LSX 2 "register_operand")
+	   (match_operand:LSX 3 "register_operand")]))]
+  "ISA_HAS_LSX"
+{
+  bool ok = loongarch_expand_vec_cmp (operands);
+  gcc_assert (ok);
+  DONE;
+})
+
+(define_expand "vec_cmpu<ILSX:mode><mode_i>"
+  [(set (match_operand:<VIMODE> 0 "register_operand")
+	(match_operator 1 ""
+	  [(match_operand:ILSX 2 "register_operand")
+	   (match_operand:ILSX 3 "register_operand")]))]
+  "ISA_HAS_LSX"
+{
+  bool ok = loongarch_expand_vec_cmp (operands);
+  gcc_assert (ok);
+  DONE;
+})
+
+(define_expand "vcondu<LSX:mode><ILSX:mode>"
+  [(match_operand:LSX 0 "register_operand")
+   (match_operand:LSX 1 "reg_or_m1_operand")
+   (match_operand:LSX 2 "reg_or_0_operand")
+   (match_operator 3 ""
+     [(match_operand:ILSX 4 "register_operand")
+      (match_operand:ILSX 5 "register_operand")])]
+  "ISA_HAS_LSX
+   && (GET_MODE_NUNITS (<LSX:MODE>mode) == GET_MODE_NUNITS (<ILSX:MODE>mode))"
+{
+  loongarch_expand_vec_cond_expr (<LSX:MODE>mode, <LSX:VIMODE>mode, operands);
+  DONE;
+})
+
+(define_expand "vcond<LSX:mode><LSX_2:mode>"
+  [(match_operand:LSX 0 "register_operand")
+   (match_operand:LSX 1 "reg_or_m1_operand")
+   (match_operand:LSX 2 "reg_or_0_operand")
+   (match_operator 3 ""
+     [(match_operand:LSX_2 4 "register_operand")
+      (match_operand:LSX_2 5 "register_operand")])]
+  "ISA_HAS_LSX
+   && (GET_MODE_NUNITS (<LSX:MODE>mode) == GET_MODE_NUNITS (<LSX_2:MODE>mode))"
+{
+  loongarch_expand_vec_cond_expr (<LSX:MODE>mode, <LSX:VIMODE>mode, operands);
+  DONE;
+})
+
+(define_expand "vcond_mask_<ILSX:mode><ILSX:mode>"
+  [(match_operand:ILSX 0 "register_operand")
+   (match_operand:ILSX 1 "reg_or_m1_operand")
+   (match_operand:ILSX 2 "reg_or_0_operand")
+   (match_operand:ILSX 3 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  loongarch_expand_vec_cond_mask_expr (<ILSX:MODE>mode,
+				      <ILSX:VIMODE>mode, operands);
+  DONE;
+})
+
+(define_insn "lsx_vinsgr2vr_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(vec_merge:ILSX
+	  (vec_duplicate:ILSX
+	    (match_operand:<UNITMODE> 1 "reg_or_0_operand" "rJ"))
+	  (match_operand:ILSX 2 "register_operand" "0")
+	  (match_operand 3 "const_<bitmask>_operand" "")))]
+  "ISA_HAS_LSX"
+{
+  if (!TARGET_64BIT && (<MODE>mode == V2DImode || <MODE>mode == V2DFmode))
+    return "#";
+  else
+    return "vinsgr2vr.<lsxfmt>\t%w0,%z1,%y3";
+}
+  [(set_attr "type" "simd_insert")
+   (set_attr "mode" "<MODE>")])
+
+(define_split
+  [(set (match_operand:LSX_D 0 "register_operand")
+	(vec_merge:LSX_D
+	  (vec_duplicate:LSX_D
+	    (match_operand:<UNITMODE> 1 "<LSX_D:lsx_d>_operand"))
+	  (match_operand:LSX_D 2 "register_operand")
+	  (match_operand 3 "const_<bitmask>_operand")))]
+  "reload_completed && ISA_HAS_LSX && !TARGET_64BIT"
+  [(const_int 0)]
+{
+  loongarch_split_lsx_insert_d (operands[0], operands[2], operands[3], operands[1]);
+  DONE;
+})
+
+(define_insn "lsx_vextrins_<lsxfmt_f>_internal"
+  [(set (match_operand:LSX 0 "register_operand" "=f")
+	(vec_merge:LSX
+	  (vec_duplicate:LSX
+	    (vec_select:<UNITMODE>
+	      (match_operand:LSX 1 "register_operand" "f")
+	      (parallel [(const_int 0)])))
+	  (match_operand:LSX 2 "register_operand" "0")
+	  (match_operand 3 "const_<bitmask>_operand" "")))]
+  "ISA_HAS_LSX"
+  "vextrins.<lsxfmt>\t%w0,%w1,%y3<<4"
+  [(set_attr "type" "simd_insert")
+   (set_attr "mode" "<MODE>")])
+
+;; Operand 1 is a scalar.
+(define_insn "lsx_vextrins_<lsxfmt_f>_scalar"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(vec_merge:FLSX
+	  (vec_duplicate:FLSX
+	    (match_operand:<UNITMODE> 1 "register_operand" "f"))
+	  (match_operand:FLSX 2 "register_operand" "0")
+	  (match_operand 3 "const_<bitmask>_operand" "")))]
+  "ISA_HAS_LSX"
+  "vextrins.<lsxfmt>\t%w0,%w1,%y3<<4"
+  [(set_attr "type" "simd_insert")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vpickve2gr_<lsxfmt><u>"
+  [(set (match_operand:<VRES> 0 "register_operand" "=r")
+	(any_extend:<VRES>
+	  (vec_select:<UNITMODE>
+	    (match_operand:ILSX_HB 1 "register_operand" "f")
+	    (parallel [(match_operand 2 "const_<indeximm>_operand" "")]))))]
+  "ISA_HAS_LSX"
+  "vpickve2gr.<lsxfmt><u>\t%0,%w1,%2"
+  [(set_attr "type" "simd_copy")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vpickve2gr_<lsxfmt_f><u>"
+  [(set (match_operand:<UNITMODE> 0 "register_operand" "=r")
+	(any_extend:<UNITMODE>
+	  (vec_select:<UNITMODE>
+	    (match_operand:LSX_W 1 "register_operand" "f")
+	    (parallel [(match_operand 2 "const_<indeximm>_operand" "")]))))]
+  "ISA_HAS_LSX"
+  "vpickve2gr.<lsxfmt><u>\t%0,%w1,%2"
+  [(set_attr "type" "simd_copy")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn_and_split "lsx_vpickve2gr_du"
+  [(set (match_operand:DI 0 "register_operand" "=r")
+	(vec_select:DI
+	  (match_operand:V2DI 1 "register_operand" "f")
+	  (parallel [(match_operand 2 "const_0_or_1_operand" "")])))]
+  "ISA_HAS_LSX"
+{
+  if (TARGET_64BIT)
+    return "vpickve2gr.du\t%0,%w1,%2";
+  else
+    return "#";
+}
+  "reload_completed && ISA_HAS_LSX && !TARGET_64BIT"
+  [(const_int 0)]
+{
+  loongarch_split_lsx_copy_d (operands[0], operands[1], operands[2],
+			      gen_lsx_vpickve2gr_wu);
+  DONE;
+}
+  [(set_attr "type" "simd_copy")
+   (set_attr "mode" "V2DI")])
+
+(define_insn_and_split "lsx_vpickve2gr_<lsxfmt_f>"
+  [(set (match_operand:<UNITMODE> 0 "register_operand" "=r")
+	(vec_select:<UNITMODE>
+	  (match_operand:LSX_D 1 "register_operand" "f")
+	  (parallel [(match_operand 2 "const_<indeximm>_operand" "")])))]
+  "ISA_HAS_LSX"
+{
+  if (TARGET_64BIT)
+    return "vpickve2gr.<lsxfmt>\t%0,%w1,%2";
+  else
+    return "#";
+}
+  "reload_completed && ISA_HAS_LSX && !TARGET_64BIT"
+  [(const_int 0)]
+{
+  loongarch_split_lsx_copy_d (operands[0], operands[1], operands[2],
+			      gen_lsx_vpickve2gr_w);
+  DONE;
+}
+  [(set_attr "type" "simd_copy")
+   (set_attr "mode" "<MODE>")])
+
+
+(define_expand "abs<mode>2"
+  [(match_operand:ILSX 0 "register_operand" "=f")
+   (abs:ILSX (match_operand:ILSX 1 "register_operand" "f"))]
+  "ISA_HAS_LSX"
+{
+  if (ISA_HAS_LSX)
+  {
+    emit_insn (gen_vabs<mode>2 (operands[0], operands[1]));
+    DONE;
+  }
+  else
+  {
+    rtx reg = gen_reg_rtx (<MODE>mode);
+    emit_move_insn (reg, CONST0_RTX (<MODE>mode));
+    emit_insn (gen_lsx_vadda_<lsxfmt> (operands[0], operands[1], reg));
+    DONE;
+  }
+})
+
+(define_expand "neg<mode>2"
+  [(set (match_operand:ILSX 0 "register_operand")
+	(neg:ILSX (match_operand:ILSX 1 "register_operand")))]
+  "ISA_HAS_LSX"
+{
+  emit_insn (gen_vneg<mode>2 (operands[0], operands[1]));
+  DONE;
+})
+
+(define_expand "neg<mode>2"
+  [(set (match_operand:FLSX 0 "register_operand")
+	(neg:FLSX (match_operand:FLSX 1 "register_operand")))]
+  "ISA_HAS_LSX"
+{
+  rtx reg = gen_reg_rtx (<MODE>mode);
+  emit_move_insn (reg, CONST0_RTX (<MODE>mode));
+  emit_insn (gen_sub<mode>3 (operands[0], reg, operands[1]));
+  DONE;
+})
+
+(define_expand "lsx_vrepli<mode>"
+  [(match_operand:ILSX 0 "register_operand")
+   (match_operand 1 "const_imm10_operand")]
+  "ISA_HAS_LSX"
+{
+  if (<MODE>mode == V16QImode)
+    operands[1] = GEN_INT (trunc_int_for_mode (INTVAL (operands[1]),
+					       <UNITMODE>mode));
+  emit_move_insn (operands[0],
+		  loongarch_gen_const_int_vector (<MODE>mode, INTVAL (operands[1])));
+  DONE;
+})
+
+(define_expand "vec_perm<mode>"
+ [(match_operand:LSX 0 "register_operand")
+  (match_operand:LSX 1 "register_operand")
+  (match_operand:LSX 2 "register_operand")
+  (match_operand:LSX 3 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  loongarch_expand_vec_perm (operands[0], operands[1],
+			     operands[2], operands[3]);
+  DONE;
+})
+
+(define_insn "lsx_vshuf_<lsxfmt_f>"
+  [(set (match_operand:LSX_DWH 0 "register_operand" "=f")
+	(unspec:LSX_DWH [(match_operand:LSX_DWH 1 "register_operand" "0")
+			 (match_operand:LSX_DWH 2 "register_operand" "f")
+			 (match_operand:LSX_DWH 3 "register_operand" "f")]
+			UNSPEC_LSX_VSHUF))]
+  "ISA_HAS_LSX"
+  "vshuf.<lsxfmt>\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_sld")
+   (set_attr "mode" "<MODE>")])
+
+(define_expand "mov<mode>"
+  [(set (match_operand:LSX 0)
+	(match_operand:LSX 1))]
+  "ISA_HAS_LSX"
+{
+  if (loongarch_legitimize_move (<MODE>mode, operands[0], operands[1]))
+    DONE;
+})
+
+(define_expand "movmisalign<mode>"
+  [(set (match_operand:LSX 0)
+	(match_operand:LSX 1))]
+  "ISA_HAS_LSX"
+{
+  if (loongarch_legitimize_move (<MODE>mode, operands[0], operands[1]))
+    DONE;
+})
+
+(define_insn "mov<mode>_lsx"
+  [(set (match_operand:LSX 0 "nonimmediate_operand" "=f,f,R,*r,*f")
+	(match_operand:LSX 1 "move_operand" "fYGYI,R,f,*f,*r"))]
+  "ISA_HAS_LSX"
+{ return loongarch_output_move (operands[0], operands[1]); }
+  [(set_attr "type" "simd_move,simd_load,simd_store,simd_copy,simd_insert")
+   (set_attr "mode" "<MODE>")])
+
+(define_split
+  [(set (match_operand:LSX 0 "nonimmediate_operand")
+	(match_operand:LSX 1 "move_operand"))]
+  "reload_completed && ISA_HAS_LSX
+   && loongarch_split_move_insn_p (operands[0], operands[1])"
+  [(const_int 0)]
+{
+  loongarch_split_move_insn (operands[0], operands[1], curr_insn);
+  DONE;
+})
+
+;; Offset load
+(define_expand "lsx_ld_<lsxfmt_f>"
+  [(match_operand:LSX 0 "register_operand")
+   (match_operand 1 "pmode_register_operand")
+   (match_operand 2 "aq10<lsxfmt>_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx addr = plus_constant (GET_MODE (operands[1]), operands[1],
+			    INTVAL (operands[2]));
+  loongarch_emit_move (operands[0], gen_rtx_MEM (<MODE>mode, addr));
+  DONE;
+})
+
+;; Offset store
+(define_expand "lsx_st_<lsxfmt_f>"
+  [(match_operand:LSX 0 "register_operand")
+   (match_operand 1 "pmode_register_operand")
+   (match_operand 2 "aq10<lsxfmt>_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx addr = plus_constant (GET_MODE (operands[1]), operands[1],
+			    INTVAL (operands[2]));
+  loongarch_emit_move (gen_rtx_MEM (<MODE>mode, addr), operands[0]);
+  DONE;
+})
+
+;; Integer operations
+(define_insn "add<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f,f")
+	(plus:ILSX
+	  (match_operand:ILSX 1 "register_operand" "f,f,f")
+	  (match_operand:ILSX 2 "reg_or_vector_same_ximm5_operand" "f,Unv5,Uuv5")))]
+  "ISA_HAS_LSX"
+{
+  switch (which_alternative)
+    {
+    case 0:
+      return "vadd.<lsxfmt>\t%w0,%w1,%w2";
+    case 1:
+      {
+	HOST_WIDE_INT val = INTVAL (CONST_VECTOR_ELT (operands[2], 0));
+
+	operands[2] = GEN_INT (-val);
+	return "vsubi.<lsxfmt_u>\t%w0,%w1,%d2";
+      }
+    case 2:
+      return "vaddi.<lsxfmt_u>\t%w0,%w1,%E2";
+    default:
+      gcc_unreachable ();
+    }
+}
+  [(set_attr "alu_type" "simd_add")
+   (set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
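+
+;; Worked example for alternative 1 above: for V4SI, adding a constant
+;; splat of -3 negates the immediate and should emit "vsubi.wu" with
+;; immediate 3 rather than "vadd.w" with a register operand.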
+
+(define_insn "sub<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f")
+	(minus:ILSX
+	  (match_operand:ILSX 1 "register_operand" "f,f")
+	  (match_operand:ILSX 2 "reg_or_vector_same_uimm5_operand" "f,Uuv5")))]
+  "ISA_HAS_LSX"
+  "@
+   vsub.<lsxfmt>\t%w0,%w1,%w2
+   vsubi.<lsxfmt_u>\t%w0,%w1,%E2"
+  [(set_attr "alu_type" "simd_add")
+   (set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "mul<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(mult:ILSX (match_operand:ILSX 1 "register_operand" "f")
+		   (match_operand:ILSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vmul.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_mul")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vmadd_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(plus:ILSX (mult:ILSX (match_operand:ILSX 2 "register_operand" "f")
+			      (match_operand:ILSX 3 "register_operand" "f"))
+		   (match_operand:ILSX 1 "register_operand" "0")))]
+  "ISA_HAS_LSX"
+  "vmadd.<lsxfmt>\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_mul")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vmsub_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(minus:ILSX (match_operand:ILSX 1 "register_operand" "0")
+		    (mult:ILSX (match_operand:ILSX 2 "register_operand" "f")
+			       (match_operand:ILSX 3 "register_operand" "f"))))]
+  "ISA_HAS_LSX"
+  "vmsub.<lsxfmt>\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_mul")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "div<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(div:ILSX (match_operand:ILSX 1 "register_operand" "f")
+		  (match_operand:ILSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+{ return loongarch_lsx_output_division ("vdiv.<lsxfmt>\t%w0,%w1,%w2", operands); }
+  [(set_attr "type" "simd_div")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "udiv<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(udiv:ILSX (match_operand:ILSX 1 "register_operand" "f")
+		   (match_operand:ILSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+{ return loongarch_lsx_output_division ("vdiv.<lsxfmt_u>\t%w0,%w1,%w2", operands); }
+  [(set_attr "type" "simd_div")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "mod<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(mod:ILSX (match_operand:ILSX 1 "register_operand" "f")
+		  (match_operand:ILSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+{ return loongarch_lsx_output_division ("vmod.<lsxfmt>\t%w0,%w1,%w2", operands); }
+  [(set_attr "type" "simd_div")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "umod<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(umod:ILSX (match_operand:ILSX 1 "register_operand" "f")
+		   (match_operand:ILSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+{ return loongarch_lsx_output_division ("vmod.<lsxfmt_u>\t%w0,%w1,%w2", operands); }
+  [(set_attr "type" "simd_div")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "xor<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f,f")
+	(xor:ILSX
+	  (match_operand:ILSX 1 "register_operand" "f,f,f")
+	  (match_operand:ILSX 2 "reg_or_vector_same_val_operand" "f,YC,Urv8")))]
+  "ISA_HAS_LSX"
+  "@
+   vxor.v\t%w0,%w1,%w2
+   vbitrevi.%v0\t%w0,%w1,%V2
+   vxori.b\t%w0,%w1,%B2"
+  [(set_attr "type" "simd_logic,simd_bit,simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "ior<mode>3"
+  [(set (match_operand:LSX 0 "register_operand" "=f,f,f")
+	(ior:LSX
+	  (match_operand:LSX 1 "register_operand" "f,f,f")
+	  (match_operand:LSX 2 "reg_or_vector_same_val_operand" "f,YC,Urv8")))]
+  "ISA_HAS_LSX"
+  "@
+   vor.v\t%w0,%w1,%w2
+   vbitseti.%v0\t%w0,%w1,%V2
+   vori.b\t%w0,%w1,%B2"
+  [(set_attr "type" "simd_logic,simd_bit,simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "and<mode>3"
+  [(set (match_operand:LSX 0 "register_operand" "=f,f,f")
+	(and:LSX
+	  (match_operand:LSX 1 "register_operand" "f,f,f")
+	  (match_operand:LSX 2 "reg_or_vector_same_val_operand" "f,YZ,Urv8")))]
+  "ISA_HAS_LSX"
+{
+  switch (which_alternative)
+    {
+    case 0:
+      return "vand.v\t%w0,%w1,%w2";
+    case 1:
+      {
+	rtx elt0 = CONST_VECTOR_ELT (operands[2], 0);
+	unsigned HOST_WIDE_INT val = ~UINTVAL (elt0);
+	operands[2] = loongarch_gen_const_int_vector (<MODE>mode, val & (-val));
+	return "vbitclri.%v0\t%w0,%w1,%V2";
+      }
+    case 2:
+      return "vandi.b\t%w0,%w1,%B2";
+    default:
+      gcc_unreachable ();
+    }
+}
+  [(set_attr "type" "simd_logic,simd_bit,simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "one_cmpl<mode>2"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(not:ILSX (match_operand:ILSX 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vnor.v\t%w0,%w1,%w1"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "TI")])
+
+(define_insn "vlshr<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f")
+	(lshiftrt:ILSX
+	  (match_operand:ILSX 1 "register_operand" "f,f")
+	  (match_operand:ILSX 2 "reg_or_vector_same_uimm6_operand" "f,Uuv6")))]
+  "ISA_HAS_LSX"
+  "@
+   vsrl.<lsxfmt>\t%w0,%w1,%w2
+   vsrli.<lsxfmt>\t%w0,%w1,%E2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "vashr<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f")
+	(ashiftrt:ILSX
+	  (match_operand:ILSX 1 "register_operand" "f,f")
+	  (match_operand:ILSX 2 "reg_or_vector_same_uimm6_operand" "f,Uuv6")))]
+  "ISA_HAS_LSX"
+  "@
+   vsra.<lsxfmt>\t%w0,%w1,%w2
+   vsrai.<lsxfmt>\t%w0,%w1,%E2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "vashl<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f")
+	(ashift:ILSX
+	  (match_operand:ILSX 1 "register_operand" "f,f")
+	  (match_operand:ILSX 2 "reg_or_vector_same_uimm6_operand" "f,Uuv6")))]
+  "ISA_HAS_LSX"
+  "@
+   vsll.<lsxfmt>\t%w0,%w1,%w2
+   vslli.<lsxfmt>\t%w0,%w1,%E2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+;; Floating-point operations
+(define_insn "add<mode>3"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(plus:FLSX (match_operand:FLSX 1 "register_operand" "f")
+		   (match_operand:FLSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vfadd.<flsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "sub<mode>3"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(minus:FLSX (match_operand:FLSX 1 "register_operand" "f")
+		    (match_operand:FLSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vfsub.<flsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "mul<mode>3"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(mult:FLSX (match_operand:FLSX 1 "register_operand" "f")
+		   (match_operand:FLSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vfmul.<flsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fmul")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "div<mode>3"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(div:FLSX (match_operand:FLSX 1 "register_operand" "f")
+		  (match_operand:FLSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vfdiv.<flsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fdiv")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "fma<mode>4"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(fma:FLSX (match_operand:FLSX 1 "register_operand" "f")
+		  (match_operand:FLSX 2 "register_operand" "f")
+		  (match_operand:FLSX 3 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vfmadd.<flsxfmt>\t%w0,%w1,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "fnma<mode>4"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(fma:FLSX (neg:FLSX (match_operand:FLSX 1 "register_operand" "f"))
+		  (match_operand:FLSX 2 "register_operand" "f")
+		  (match_operand:FLSX 3 "register_operand" "0")))]
+  "ISA_HAS_LSX"
+  "vfnmsub.<flsxfmt>\t%w0,%w1,%w2,%w0"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "sqrt<mode>2"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(sqrt:FLSX (match_operand:FLSX 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vfsqrt.<flsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_fdiv")
+   (set_attr "mode" "<MODE>")])
+
+;; Built-in functions
+(define_insn "lsx_vadda_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(plus:ILSX (abs:ILSX (match_operand:ILSX 1 "register_operand" "f"))
+		   (abs:ILSX (match_operand:ILSX 2 "register_operand" "f"))))]
+  "ISA_HAS_LSX"
+  "vadda.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "ssadd<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(ss_plus:ILSX (match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vsadd.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "usadd<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(us_plus:ILSX (match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vsadd.<lsxfmt_u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vabsd_s_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(abs:ILSX (minus:ILSX (match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f"))))]
+  "ISA_HAS_LSX"
+  "vabsd.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vabsd_u_<lsxfmt_u>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VABSD_U))]
+  "ISA_HAS_LSX"
+  "vabsd.<lsxfmt_u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vavg_s_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VAVG_S))]
+  "ISA_HAS_LSX"
+  "vavg.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vavg_u_<lsxfmt_u>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VAVG_U))]
+  "ISA_HAS_LSX"
+  "vavg.<lsxfmt_u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vavgr_s_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VAVGR_S))]
+  "ISA_HAS_LSX"
+  "vavgr.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vavgr_u_<lsxfmt_u>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VAVGR_U))]
+  "ISA_HAS_LSX"
+  "vavgr.<lsxfmt_u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vbitclr_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VBITCLR))]
+  "ISA_HAS_LSX"
+  "vbitclr.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vbitclri_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand 2 "const_<bitimm>_operand" "")]
+		     UNSPEC_LSX_VBITCLRI))]
+  "ISA_HAS_LSX"
+  "vbitclri.<lsxfmt>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vbitrev_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VBITREV))]
+  "ISA_HAS_LSX"
+  "vbitrev.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vbitrevi_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		       (match_operand 2 "const_lsx_branch_operand" "")]
+		     UNSPEC_LSX_VBITREVI))]
+  "ISA_HAS_LSX"
+  "vbitrevi.<lsxfmt>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vbitsel_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(ior:ILSX (and:ILSX (not:ILSX
+			      (match_operand:ILSX 3 "register_operand" "f"))
+			    (match_operand:ILSX 1 "register_operand" "f"))
+		  (and:ILSX (match_dup 3)
+			    (match_operand:ILSX 2 "register_operand" "f"))))]
+  "ISA_HAS_LSX"
+  "vbitsel.v\t%w0,%w1,%w2,%w3"
+  [(set_attr "type" "simd_bitmov")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vbitseli_b"
+  [(set (match_operand:V16QI 0 "register_operand" "=f")
+	(ior:V16QI (and:V16QI (not:V16QI
+				(match_operand:V16QI 1 "register_operand" "0"))
+			      (match_operand:V16QI 2 "register_operand" "f"))
+		   (and:V16QI (match_dup 1)
+			      (match_operand:V16QI 3 "const_vector_same_val_operand" "Urv8"))))]
+  "ISA_HAS_LSX"
+  "vbitseli.b\t%w0,%w2,%B3"
+  [(set_attr "type" "simd_bitmov")
+   (set_attr "mode" "V16QI")])
+
+(define_insn "lsx_vbitset_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VBITSET))]
+  "ISA_HAS_LSX"
+  "vbitset.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vbitseti_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand 2 "const_<bitimm>_operand" "")]
+		     UNSPEC_LSX_VBITSETI))]
+  "ISA_HAS_LSX"
+  "vbitseti.<lsxfmt>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
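+;; Integer comparisons.  <icc>/<icci> pick the register and immediate
+;; mnemonics, <cmpi> selects the signed/unsigned imm5 predicate and
+;; <cmpi_1> appends the unsigned suffix to the format.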
+(define_code_iterator ICC [eq le leu lt ltu])
+
+(define_code_attr icc
+  [(eq  "eq")
+   (le  "le")
+   (leu "le")
+   (lt  "lt")
+   (ltu "lt")])
+
+(define_code_attr icci
+  [(eq  "eqi")
+   (le  "lei")
+   (leu "lei")
+   (lt  "lti")
+   (ltu "lti")])
+
+(define_code_attr cmpi
+  [(eq   "s")
+   (le   "s")
+   (leu  "u")
+   (lt   "s")
+   (ltu  "u")])
+
+(define_code_attr cmpi_1
+  [(eq   "")
+   (le   "")
+   (leu  "u")
+   (lt   "")
+   (ltu  "u")])
+
+(define_insn "lsx_vs<ICC:icc>_<ILSX:lsxfmt><ICC:cmpi_1>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f")
+	(ICC:ILSX
+	  (match_operand:ILSX 1 "register_operand" "f,f")
+	  (match_operand:ILSX 2 "reg_or_vector_same_<ICC:cmpi>imm5_operand" "f,U<ICC:cmpi>v5")))]
+  "ISA_HAS_LSX"
+  "@
+   vs<ICC:icc>.<ILSX:lsxfmt><ICC:cmpi_1>\t%w0,%w1,%w2
+   vs<ICC:icci>.<ILSX:lsxfmt><ICC:cmpi_1>\t%w0,%w1,%E2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vfclass_<flsxfmt>"
+  [(set (match_operand:<VIMODE> 0 "register_operand" "=f")
+	(unspec:<VIMODE> [(match_operand:FLSX 1 "register_operand" "f")]
+			 UNSPEC_LSX_VFCLASS))]
+  "ISA_HAS_LSX"
+  "vfclass.<flsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_fclass")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vfcmp_caf_<flsxfmt>"
+  [(set (match_operand:<VIMODE> 0 "register_operand" "=f")
+	(unspec:<VIMODE> [(match_operand:FLSX 1 "register_operand" "f")
+			  (match_operand:FLSX 2 "register_operand" "f")]
+			 UNSPEC_LSX_VFCMP_CAF))]
+  "ISA_HAS_LSX"
+  "vfcmp.caf.<flsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fcmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vfcmp_cune_<FLSX:flsxfmt>"
+  [(set (match_operand:<VIMODE> 0 "register_operand" "=f")
+	(unspec:<VIMODE> [(match_operand:FLSX 1 "register_operand" "f")
+			  (match_operand:FLSX 2 "register_operand" "f")]
+			 UNSPEC_LSX_VFCMP_CUNE))]
+  "ISA_HAS_LSX"
+  "vfcmp.cune.<FLSX:flsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fcmp")
+   (set_attr "mode" "<MODE>")])
+
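+;; Floating-point comparisons.  The quiet (c*) variants are expressed with
+;; RTL comparison codes via <fcc>; the signalling (s*) variants remain unspecs.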
+(define_code_iterator vfcond [unordered ordered eq ne le lt uneq unle unlt])
+
+(define_code_attr fcc
+  [(unordered "cun")
+   (ordered   "cor")
+   (eq	      "ceq")
+   (ne	      "cne")
+   (uneq      "cueq")
+   (unle      "cule")
+   (unlt      "cult")
+   (le	      "cle")
+   (lt	      "clt")])
+
+(define_int_iterator FSC_UNS [UNSPEC_LSX_VFCMP_SAF UNSPEC_LSX_VFCMP_SUN UNSPEC_LSX_VFCMP_SOR
+			      UNSPEC_LSX_VFCMP_SEQ UNSPEC_LSX_VFCMP_SNE UNSPEC_LSX_VFCMP_SUEQ
+			      UNSPEC_LSX_VFCMP_SUNE UNSPEC_LSX_VFCMP_SULE UNSPEC_LSX_VFCMP_SULT
+			      UNSPEC_LSX_VFCMP_SLE UNSPEC_LSX_VFCMP_SLT])
+
+(define_int_attr fsc
+  [(UNSPEC_LSX_VFCMP_SAF  "saf")
+   (UNSPEC_LSX_VFCMP_SUN  "sun")
+   (UNSPEC_LSX_VFCMP_SOR  "sor")
+   (UNSPEC_LSX_VFCMP_SEQ  "seq")
+   (UNSPEC_LSX_VFCMP_SNE  "sne")
+   (UNSPEC_LSX_VFCMP_SUEQ "sueq")
+   (UNSPEC_LSX_VFCMP_SUNE "sune")
+   (UNSPEC_LSX_VFCMP_SULE "sule")
+   (UNSPEC_LSX_VFCMP_SULT "sult")
+   (UNSPEC_LSX_VFCMP_SLE  "sle")
+   (UNSPEC_LSX_VFCMP_SLT  "slt")])
+
+(define_insn "lsx_vfcmp_<vfcond:fcc>_<FLSX:flsxfmt>"
+  [(set (match_operand:<VIMODE> 0 "register_operand" "=f")
+	(vfcond:<VIMODE> (match_operand:FLSX 1 "register_operand" "f")
+			 (match_operand:FLSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vfcmp.<vfcond:fcc>.<FLSX:flsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fcmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vfcmp_<fsc>_<FLSX:flsxfmt>"
+  [(set (match_operand:<VIMODE> 0 "register_operand" "=f")
+	(unspec:<VIMODE> [(match_operand:FLSX 1 "register_operand" "f")
+			  (match_operand:FLSX 2 "register_operand" "f")]
+			 FSC_UNS))]
+  "ISA_HAS_LSX"
+  "vfcmp.<fsc>.<FLSX:flsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fcmp")
+   (set_attr "mode" "<MODE>")])
+
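+;; Vector int <-> FP conversions.  <fint> is the integer vector mode paired
+;; with each FP mode; FINTCNV/FINTCNV_2 supply the cnv_mode attribute for the
+;; two directions.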
+(define_mode_attr fint
+  [(V4SF "v4si")
+   (V2DF "v2di")])
+
+(define_mode_attr FINTCNV
+  [(V4SF "I2S")
+   (V2DF "I2D")])
+
+(define_mode_attr FINTCNV_2
+  [(V4SF "S2I")
+   (V2DF "D2I")])
+
+(define_insn "float<fint><FLSX:mode>2"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(float:FLSX (match_operand:<VIMODE> 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vffint.<flsxfmt>.<ilsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "cnv_mode" "<FINTCNV>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "floatuns<fint><FLSX:mode>2"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(unsigned_float:FLSX
+	  (match_operand:<VIMODE> 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vffint.<flsxfmt>.<ilsxfmt_u>\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "cnv_mode" "<FINTCNV>")
+   (set_attr "mode" "<MODE>")])
+
+(define_mode_attr FFQ
+  [(V4SF "V8HI")
+   (V2DF "V4SI")])
+
+(define_insn "lsx_vreplgr2vr_<lsxfmt_f>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f")
+	(vec_duplicate:ILSX
+	  (match_operand:<UNITMODE> 1 "reg_or_0_operand" "r,J")))]
+  "ISA_HAS_LSX"
+{
+  if (which_alternative == 1)
+    return "vrepli.<lsxfmt>\t%w0,0";
+
+  if (!TARGET_64BIT && (<MODE>mode == V2DImode || <MODE>mode == V2DFmode))
+    return "#";
+  else
+    return "vreplgr2vr.<lsxfmt>\t%w0,%z1";
+}
+  [(set_attr "type" "simd_fill")
+   (set_attr "mode" "<MODE>")])
+
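+;; On !TARGET_64BIT a 64-bit element cannot be moved from a GPR in one
+;; instruction, so the LSX_D (doubleword element) broadcast is split after
+;; reload via loongarch_split_lsx_fill_d.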
+(define_split
+  [(set (match_operand:LSX_D 0 "register_operand")
+	(vec_duplicate:LSX_D
+	  (match_operand:<UNITMODE> 1 "register_operand")))]
+  "reload_completed && ISA_HAS_LSX && !TARGET_64BIT"
+  [(const_int 0)]
+{
+  loongarch_split_lsx_fill_d (operands[0], operands[1]);
+  DONE;
+})
+
+(define_insn "logb<mode>2"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(unspec:FLSX [(match_operand:FLSX 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFLOGB))]
+  "ISA_HAS_LSX"
+  "vflogb.<flsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_flog2")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "smax<mode>3"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(smax:FLSX (match_operand:FLSX 1 "register_operand" "f")
+		   (match_operand:FLSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vfmax.<flsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fminmax")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vfmaxa_<flsxfmt>"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(if_then_else:FLSX
+	   (gt (abs:FLSX (match_operand:FLSX 1 "register_operand" "f"))
+	       (abs:FLSX (match_operand:FLSX 2 "register_operand" "f")))
+	   (match_dup 1)
+	   (match_dup 2)))]
+  "ISA_HAS_LSX"
+  "vfmaxa.<flsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fminmax")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "smin<mode>3"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(smin:FLSX (match_operand:FLSX 1 "register_operand" "f")
+		   (match_operand:FLSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vfmin.<flsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fminmax")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vfmina_<flsxfmt>"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(if_then_else:FLSX
+	   (lt (abs:FLSX (match_operand:FLSX 1 "register_operand" "f"))
+	       (abs:FLSX (match_operand:FLSX 2 "register_operand" "f")))
+	   (match_dup 1)
+	   (match_dup 2)))]
+  "ISA_HAS_LSX"
+  "vfmina.<flsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fminmax")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vfrecip_<flsxfmt>"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(unspec:FLSX [(match_operand:FLSX 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFRECIP))]
+  "ISA_HAS_LSX"
+  "vfrecip.<flsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_fdiv")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vfrint_<flsxfmt>"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(unspec:FLSX [(match_operand:FLSX 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFRINT))]
+  "ISA_HAS_LSX"
+  "vfrint.<flsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vfrsqrt_<flsxfmt>"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(unspec:FLSX [(match_operand:FLSX 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFRSQRT))]
+  "ISA_HAS_LSX"
+  "vfrsqrt.<flsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_fdiv")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vftint_s_<ilsxfmt>_<flsxfmt>"
+  [(set (match_operand:<VIMODE> 0 "register_operand" "=f")
+	(unspec:<VIMODE> [(match_operand:FLSX 1 "register_operand" "f")]
+			 UNSPEC_LSX_VFTINT_S))]
+  "ISA_HAS_LSX"
+  "vftint.<ilsxfmt>.<flsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "cnv_mode" "<FINTCNV_2>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vftint_u_<ilsxfmt_u>_<flsxfmt>"
+  [(set (match_operand:<VIMODE> 0 "register_operand" "=f")
+	(unspec:<VIMODE> [(match_operand:FLSX 1 "register_operand" "f")]
+			 UNSPEC_LSX_VFTINT_U))]
+  "ISA_HAS_LSX"
+  "vftint.<ilsxfmt_u>.<flsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "cnv_mode" "<FINTCNV_2>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "fix_trunc<FLSX:mode><mode_i>2"
+  [(set (match_operand:<VIMODE> 0 "register_operand" "=f")
+	(fix:<VIMODE> (match_operand:FLSX 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vftintrz.<ilsxfmt>.<flsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "cnv_mode" "<FINTCNV_2>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "fixuns_trunc<FLSX:mode><mode_i>2"
+  [(set (match_operand:<VIMODE> 0 "register_operand" "=f")
+	(unsigned_fix:<VIMODE> (match_operand:FLSX 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vftintrz.<ilsxfmt_u>.<flsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "cnv_mode" "<FINTCNV_2>")
+   (set_attr "mode" "<MODE>")])
+
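+;; Horizontal widening add/sub: the odd elements of operand 1 are sign- or
+;; zero-extended and combined with the even elements of operand 2.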
+(define_insn "lsx_vh<optab>w_h<u>_b<u>"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(addsub:V8HI
+	  (any_extend:V8HI
+	    (vec_select:V8QI
+	      (match_operand:V16QI 1 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)])))
+	  (any_extend:V8HI
+	    (vec_select:V8QI
+	      (match_operand:V16QI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)])))))]
+  "ISA_HAS_LSX"
+  "vh<optab>w.h<u>.b<u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vh<optab>w_w<u>_h<u>"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(addsub:V4SI
+	  (any_extend:V4SI
+	    (vec_select:V4HI
+	      (match_operand:V8HI 1 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)])))
+	  (any_extend:V4SI
+	    (vec_select:V4HI
+	      (match_operand:V8HI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)])))))]
+  "ISA_HAS_LSX"
+  "vh<optab>w.w<u>.h<u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vh<optab>w_d<u>_w<u>"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(addsub:V2DI
+	  (any_extend:V2DI
+	    (vec_select:V2SI
+	      (match_operand:V4SI 1 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)])))
+	  (any_extend:V2DI
+	    (vec_select:V2SI
+	      (match_operand:V4SI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)])))))]
+  "ISA_HAS_LSX"
+  "vh<optab>w.d<u>.w<u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
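+;; Even/odd pack, interleave and pick patterns, modelled as a vec_select over
+;; the vec_concat of both inputs; the inputs are emitted in swapped order
+;; (%w2,%w1) to match the instruction's element ordering.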
+(define_insn "lsx_vpackev_b"
+  [(set (match_operand:V16QI 0 "register_operand" "=f")
+	(vec_select:V16QI
+	  (vec_concat:V32QI
+	    (match_operand:V16QI 1 "register_operand" "f")
+	    (match_operand:V16QI 2 "register_operand" "f"))
+	  (parallel [(const_int 0)  (const_int 16)
+		     (const_int 2)  (const_int 18)
+		     (const_int 4)  (const_int 20)
+		     (const_int 6)  (const_int 22)
+		     (const_int 8)  (const_int 24)
+		     (const_int 10) (const_int 26)
+		     (const_int 12) (const_int 28)
+		     (const_int 14) (const_int 30)])))]
+  "ISA_HAS_LSX"
+  "vpackev.b\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V16QI")])
+
+(define_insn "lsx_vpackev_h"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(vec_select:V8HI
+	  (vec_concat:V16HI
+	    (match_operand:V8HI 1 "register_operand" "f")
+	    (match_operand:V8HI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 8)
+		     (const_int 2) (const_int 10)
+		     (const_int 4) (const_int 12)
+		     (const_int 6) (const_int 14)])))]
+  "ISA_HAS_LSX"
+  "vpackev.h\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vpackev_w"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(vec_select:V4SI
+	  (vec_concat:V8SI
+	    (match_operand:V4SI 1 "register_operand" "f")
+	    (match_operand:V4SI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 4)
+		     (const_int 2) (const_int 6)])))]
+  "ISA_HAS_LSX"
+  "vpackev.w\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vpackev_w_f"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(vec_select:V4SF
+	  (vec_concat:V8SF
+	    (match_operand:V4SF 1 "register_operand" "f")
+	    (match_operand:V4SF 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 4)
+		     (const_int 2) (const_int 6)])))]
+  "ISA_HAS_LSX"
+  "vpackev.w\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vilvh_b"
+  [(set (match_operand:V16QI 0 "register_operand" "=f")
+	(vec_select:V16QI
+	  (vec_concat:V32QI
+	    (match_operand:V16QI 1 "register_operand" "f")
+	    (match_operand:V16QI 2 "register_operand" "f"))
+	  (parallel [(const_int 8)  (const_int 24)
+		     (const_int 9)  (const_int 25)
+		     (const_int 10) (const_int 26)
+		     (const_int 11) (const_int 27)
+		     (const_int 12) (const_int 28)
+		     (const_int 13) (const_int 29)
+		     (const_int 14) (const_int 30)
+		     (const_int 15) (const_int 31)])))]
+  "ISA_HAS_LSX"
+  "vilvh.b\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V16QI")])
+
+(define_insn "lsx_vilvh_h"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(vec_select:V8HI
+	  (vec_concat:V16HI
+	    (match_operand:V8HI 1 "register_operand" "f")
+	    (match_operand:V8HI 2 "register_operand" "f"))
+	  (parallel [(const_int 4) (const_int 12)
+		     (const_int 5) (const_int 13)
+		     (const_int 6) (const_int 14)
+		     (const_int 7) (const_int 15)])))]
+  "ISA_HAS_LSX"
+  "vilvh.h\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vilvh_w"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(vec_select:V4SI
+	  (vec_concat:V8SI
+	    (match_operand:V4SI 1 "register_operand" "f")
+	    (match_operand:V4SI 2 "register_operand" "f"))
+	  (parallel [(const_int 2) (const_int 6)
+		     (const_int 3) (const_int 7)])))]
+  "ISA_HAS_LSX"
+  "vilvh.w\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vilvh_w_f"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(vec_select:V4SF
+	  (vec_concat:V8SF
+	    (match_operand:V4SF 1 "register_operand" "f")
+	    (match_operand:V4SF 2 "register_operand" "f"))
+	  (parallel [(const_int 2) (const_int 6)
+		     (const_int 3) (const_int 7)])))]
+  "ISA_HAS_LSX"
+  "vilvh.w\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vilvh_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(vec_select:V2DI
+	  (vec_concat:V4DI
+	    (match_operand:V2DI 1 "register_operand" "f")
+	    (match_operand:V2DI 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 3)])))]
+  "ISA_HAS_LSX"
+  "vilvh.d\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vilvh_d_f"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(vec_select:V2DF
+	  (vec_concat:V4DF
+	    (match_operand:V2DF 1 "register_operand" "f")
+	    (match_operand:V2DF 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 3)])))]
+  "ISA_HAS_LSX"
+  "vilvh.d\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vpackod_b"
+  [(set (match_operand:V16QI 0 "register_operand" "=f")
+	(vec_select:V16QI
+	  (vec_concat:V32QI
+	    (match_operand:V16QI 1 "register_operand" "f")
+	    (match_operand:V16QI 2 "register_operand" "f"))
+	  (parallel [(const_int 1)  (const_int 17)
+		     (const_int 3)  (const_int 19)
+		     (const_int 5)  (const_int 21)
+		     (const_int 7)  (const_int 23)
+		     (const_int 9)  (const_int 25)
+		     (const_int 11) (const_int 27)
+		     (const_int 13) (const_int 29)
+		     (const_int 15) (const_int 31)])))]
+  "ISA_HAS_LSX"
+  "vpackod.b\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V16QI")])
+
+(define_insn "lsx_vpackod_h"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(vec_select:V8HI
+	  (vec_concat:V16HI
+	    (match_operand:V8HI 1 "register_operand" "f")
+	    (match_operand:V8HI 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 9)
+		     (const_int 3) (const_int 11)
+		     (const_int 5) (const_int 13)
+		     (const_int 7) (const_int 15)])))]
+  "ISA_HAS_LSX"
+  "vpackod.h\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vpackod_w"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(vec_select:V4SI
+	  (vec_concat:V8SI
+	    (match_operand:V4SI 1 "register_operand" "f")
+	    (match_operand:V4SI 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 5)
+		     (const_int 3) (const_int 7)])))]
+  "ISA_HAS_LSX"
+  "vpackod.w\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vpackod_w_f"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(vec_select:V4SF
+	  (vec_concat:V8SF
+	    (match_operand:V4SF 1 "register_operand" "f")
+	    (match_operand:V4SF 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 5)
+		     (const_int 3) (const_int 7)])))]
+  "ISA_HAS_LSX"
+  "vpackod.w\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vilvl_b"
+  [(set (match_operand:V16QI 0 "register_operand" "=f")
+	(vec_select:V16QI
+	  (vec_concat:V32QI
+	    (match_operand:V16QI 1 "register_operand" "f")
+	    (match_operand:V16QI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 16)
+		     (const_int 1) (const_int 17)
+		     (const_int 2) (const_int 18)
+		     (const_int 3) (const_int 19)
+		     (const_int 4) (const_int 20)
+		     (const_int 5) (const_int 21)
+		     (const_int 6) (const_int 22)
+		     (const_int 7) (const_int 23)])))]
+  "ISA_HAS_LSX"
+  "vilvl.b\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V16QI")])
+
+(define_insn "lsx_vilvl_h"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(vec_select:V8HI
+	  (vec_concat:V16HI
+	    (match_operand:V8HI 1 "register_operand" "f")
+	    (match_operand:V8HI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 8)
+		     (const_int 1) (const_int 9)
+		     (const_int 2) (const_int 10)
+		     (const_int 3) (const_int 11)])))]
+  "ISA_HAS_LSX"
+  "vilvl.h\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vilvl_w"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(vec_select:V4SI
+	  (vec_concat:V8SI
+	    (match_operand:V4SI 1 "register_operand" "f")
+	    (match_operand:V4SI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 4)
+		     (const_int 1) (const_int 5)])))]
+  "ISA_HAS_LSX"
+  "vilvl.w\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vilvl_w_f"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(vec_select:V4SF
+	  (vec_concat:V8SF
+	    (match_operand:V4SF 1 "register_operand" "f")
+	    (match_operand:V4SF 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 4)
+		     (const_int 1) (const_int 5)])))]
+  "ISA_HAS_LSX"
+  "vilvl.w\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vilvl_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(vec_select:V2DI
+	  (vec_concat:V4DI
+	    (match_operand:V2DI 1 "register_operand" "f")
+	    (match_operand:V2DI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 2)])))]
+  "ISA_HAS_LSX"
+  "vilvl.d\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vilvl_d_f"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(vec_select:V2DF
+	  (vec_concat:V4DF
+	    (match_operand:V2DF 1 "register_operand" "f")
+	    (match_operand:V2DF 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 2)])))]
+  "ISA_HAS_LSX"
+  "vilvl.d\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "smax<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f")
+	(smax:ILSX (match_operand:ILSX 1 "register_operand" "f,f")
+		   (match_operand:ILSX 2 "reg_or_vector_same_simm5_operand" "f,Usv5")))]
+  "ISA_HAS_LSX"
+  "@
+   vmax.<lsxfmt>\t%w0,%w1,%w2
+   vmaxi.<lsxfmt>\t%w0,%w1,%E2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "umax<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f")
+	(umax:ILSX (match_operand:ILSX 1 "register_operand" "f,f")
+		   (match_operand:ILSX 2 "reg_or_vector_same_uimm5_operand" "f,Uuv5")))]
+  "ISA_HAS_LSX"
+  "@
+   vmax.<lsxfmt_u>\t%w0,%w1,%w2
+   vmaxi.<lsxfmt_u>\t%w0,%w1,%B2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "smin<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f")
+	(smin:ILSX (match_operand:ILSX 1 "register_operand" "f,f")
+		   (match_operand:ILSX 2 "reg_or_vector_same_simm5_operand" "f,Usv5")))]
+  "ISA_HAS_LSX"
+  "@
+   vmin.<lsxfmt>\t%w0,%w1,%w2
+   vmini.<lsxfmt>\t%w0,%w1,%E2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "umin<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f")
+	(umin:ILSX (match_operand:ILSX 1 "register_operand" "f,f")
+		   (match_operand:ILSX 2 "reg_or_vector_same_uimm5_operand" "f,Uuv5")))]
+  "ISA_HAS_LSX"
+  "@
+   vmin.<lsxfmt_u>\t%w0,%w1,%w2
+   vmini.<lsxfmt_u>\t%w0,%w1,%B2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vclo_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(clz:ILSX (not:ILSX (match_operand:ILSX 1 "register_operand" "f"))))]
+  "ISA_HAS_LSX"
+  "vclo.<lsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "clz<mode>2"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(clz:ILSX (match_operand:ILSX 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vclz.<lsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_nor_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f,f")
+	(and:ILSX (not:ILSX (match_operand:ILSX 1 "register_operand" "f,f"))
+		  (not:ILSX (match_operand:ILSX 2 "reg_or_vector_same_val_operand" "f,Urv8"))))]
+  "ISA_HAS_LSX"
+  "@
+   vnor.v\t%w0,%w1,%w2
+   vnori.b\t%w0,%w1,%B2"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vpickev_b"
+[(set (match_operand:V16QI 0 "register_operand" "=f")
+      (vec_select:V16QI
+	(vec_concat:V32QI
+	  (match_operand:V16QI 1 "register_operand" "f")
+	  (match_operand:V16QI 2 "register_operand" "f"))
+	(parallel [(const_int 0) (const_int 2)
+		   (const_int 4) (const_int 6)
+		   (const_int 8) (const_int 10)
+		   (const_int 12) (const_int 14)
+		   (const_int 16) (const_int 18)
+		   (const_int 20) (const_int 22)
+		   (const_int 24) (const_int 26)
+		   (const_int 28) (const_int 30)])))]
+  "ISA_HAS_LSX"
+  "vpickev.b\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V16QI")])
+
+(define_insn "lsx_vpickev_h"
+[(set (match_operand:V8HI 0 "register_operand" "=f")
+      (vec_select:V8HI
+	(vec_concat:V16HI
+	  (match_operand:V8HI 1 "register_operand" "f")
+	  (match_operand:V8HI 2 "register_operand" "f"))
+	(parallel [(const_int 0) (const_int 2)
+		   (const_int 4) (const_int 6)
+		   (const_int 8) (const_int 10)
+		   (const_int 12) (const_int 14)])))]
+  "ISA_HAS_LSX"
+  "vpickev.h\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vpickev_w"
+[(set (match_operand:V4SI 0 "register_operand" "=f")
+      (vec_select:V4SI
+	(vec_concat:V8SI
+	  (match_operand:V4SI 1 "register_operand" "f")
+	  (match_operand:V4SI 2 "register_operand" "f"))
+	(parallel [(const_int 0) (const_int 2)
+		   (const_int 4) (const_int 6)])))]
+  "ISA_HAS_LSX"
+  "vpickev.w\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vpickev_w_f"
+[(set (match_operand:V4SF 0 "register_operand" "=f")
+      (vec_select:V4SF
+	(vec_concat:V8SF
+	  (match_operand:V4SF 1 "register_operand" "f")
+	  (match_operand:V4SF 2 "register_operand" "f"))
+	(parallel [(const_int 0) (const_int 2)
+		   (const_int 4) (const_int 6)])))]
+  "ISA_HAS_LSX"
+  "vpickev.w\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vpickod_b"
+[(set (match_operand:V16QI 0 "register_operand" "=f")
+      (vec_select:V16QI
+	(vec_concat:V32QI
+	  (match_operand:V16QI 1 "register_operand" "f")
+	  (match_operand:V16QI 2 "register_operand" "f"))
+	(parallel [(const_int 1) (const_int 3)
+		   (const_int 5) (const_int 7)
+		   (const_int 9) (const_int 11)
+		   (const_int 13) (const_int 15)
+		   (const_int 17) (const_int 19)
+		   (const_int 21) (const_int 23)
+		   (const_int 25) (const_int 27)
+		   (const_int 29) (const_int 31)])))]
+  "ISA_HAS_LSX"
+  "vpickod.b\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V16QI")])
+
+(define_insn "lsx_vpickod_h"
+[(set (match_operand:V8HI 0 "register_operand" "=f")
+      (vec_select:V8HI
+	(vec_concat:V16HI
+	  (match_operand:V8HI 1 "register_operand" "f")
+	  (match_operand:V8HI 2 "register_operand" "f"))
+	(parallel [(const_int 1) (const_int 3)
+		   (const_int 5) (const_int 7)
+		   (const_int 9) (const_int 11)
+		   (const_int 13) (const_int 15)])))]
+  "ISA_HAS_LSX"
+  "vpickod.h\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vpickod_w"
+[(set (match_operand:V4SI 0 "register_operand" "=f")
+      (vec_select:V4SI
+	(vec_concat:V8SI
+	  (match_operand:V4SI 1 "register_operand" "f")
+	  (match_operand:V4SI 2 "register_operand" "f"))
+	(parallel [(const_int 1) (const_int 3)
+		   (const_int 5) (const_int 7)])))]
+  "ISA_HAS_LSX"
+  "vpickod.w\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vpickod_w_f"
+[(set (match_operand:V4SF 0 "register_operand" "=f")
+      (vec_select:V4SF
+	(vec_concat:V8SF
+	  (match_operand:V4SF 1 "register_operand" "f")
+	  (match_operand:V4SF 2 "register_operand" "f"))
+	(parallel [(const_int 1) (const_int 3)
+		   (const_int 5) (const_int 7)])))]
+  "ISA_HAS_LSX"
+  "vpickod.w\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "popcount<mode>2"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(popcount:ILSX (match_operand:ILSX 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vpcnt.<lsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_pcnt")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsat_s_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand 2 "const_<bitimm>_operand" "")]
+		     UNSPEC_LSX_VSAT_S))]
+  "ISA_HAS_LSX"
+  "vsat.<lsxfmt>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_sat")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsat_u_<lsxfmt_u>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand 2 "const_<bitimm>_operand" "")]
+		     UNSPEC_LSX_VSAT_U))]
+  "ISA_HAS_LSX"
+  "vsat.<lsxfmt_u>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_sat")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vshuf4i_<lsxfmt_f>"
+  [(set (match_operand:LSX_WHB_W 0 "register_operand" "=f")
+	(vec_select:LSX_WHB_W
+	  (match_operand:LSX_WHB_W 1 "register_operand" "f")
+	  (match_operand 2 "par_const_vector_shf_set_operand" "")))]
+  "ISA_HAS_LSX"
+{
+  HOST_WIDE_INT val = 0;
+  unsigned int i;
+
+  /* We convert the selection to an immediate.  */
+  for (i = 0; i < 4; i++)
+    val |= INTVAL (XVECEXP (operands[2], 0, i)) << (2 * i);
+
+  operands[2] = GEN_INT (val);
+  return "vshuf4i.<lsxfmt>\t%w0,%w1,%X2";
+}
+  [(set_attr "type" "simd_shf")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsrar_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VSRAR))]
+  "ISA_HAS_LSX"
+  "vsrar.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsrari_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand 2 "const_<bitimm>_operand" "")]
+		     UNSPEC_LSX_VSRARI))]
+  "ISA_HAS_LSX"
+  "vsrari.<lsxfmt>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsrlr_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VSRLR))]
+  "ISA_HAS_LSX"
+  "vsrlr.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsrlri_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand 2 "const_<bitimm>_operand" "")]
+		     UNSPEC_LSX_VSRLRI))]
+  "ISA_HAS_LSX"
+  "vsrlri.<lsxfmt>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssub_s_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(ss_minus:ILSX (match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vssub.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssub_u_<lsxfmt_u>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(us_minus:ILSX (match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vssub.<lsxfmt_u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vreplve_<lsxfmt_f>"
+  [(set (match_operand:LSX 0 "register_operand" "=f")
+	(vec_duplicate:LSX
+	  (vec_select:<UNITMODE>
+	    (match_operand:LSX 1 "register_operand" "f")
+	    (parallel [(match_operand:SI 2 "register_operand" "r")]))))]
+  "ISA_HAS_LSX"
+  "vreplve.<lsxfmt>\t%w0,%w1,%z2"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vreplvei_<lsxfmt_f>"
+  [(set (match_operand:LSX 0 "register_operand" "=f")
+	(vec_duplicate:LSX
+	  (vec_select:<UNITMODE>
+	    (match_operand:LSX 1 "register_operand" "f")
+	    (parallel [(match_operand 2 "const_<indeximm>_operand" "")]))))]
+  "ISA_HAS_LSX"
+  "vreplvei.<lsxfmt>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vreplvei_<lsxfmt_f>_scalar"
+  [(set (match_operand:LSX 0 "register_operand" "=f")
+      (vec_duplicate:LSX
+	(match_operand:<UNITMODE> 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vreplvei.<lsxfmt>\t%w0,%w1,0"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "<MODE>")])
+
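+;; Floating-point precision conversions: half <-> single <-> double, plus the
+;; V2DF -> V4SF pack used for vec_pack_trunc.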
+(define_insn "lsx_vfcvt_h_s"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(unspec:V8HI [(match_operand:V4SF 1 "register_operand" "f")
+		      (match_operand:V4SF 2 "register_operand" "f")]
+		     UNSPEC_LSX_VFCVT))]
+  "ISA_HAS_LSX"
+  "vfcvt.h.s\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vfcvt_s_d"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(unspec:V4SF [(match_operand:V2DF 1 "register_operand" "f")
+		      (match_operand:V2DF 2 "register_operand" "f")]
+		     UNSPEC_LSX_VFCVT))]
+  "ISA_HAS_LSX"
+  "vfcvt.s.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "vec_pack_trunc_v2df"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(vec_concat:V4SF
+	  (float_truncate:V2SF (match_operand:V2DF 1 "register_operand" "f"))
+	  (float_truncate:V2SF (match_operand:V2DF 2 "register_operand" "f"))))]
+  "ISA_HAS_LSX"
+  "vfcvt.s.d\t%w0,%w2,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vfcvth_s_h"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(unspec:V4SF [(match_operand:V8HI 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFCVTH))]
+  "ISA_HAS_LSX"
+  "vfcvth.s.h\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vfcvth_d_s"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(float_extend:V2DF
+	(vec_select:V2SF
+	  (match_operand:V4SF 1 "register_operand" "f")
+	  (parallel [(const_int 2) (const_int 3)]))))]
+  "ISA_HAS_LSX"
+  "vfcvth.d.s\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vfcvtl_s_h"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(unspec:V4SF [(match_operand:V8HI 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFCVTL))]
+  "ISA_HAS_LSX"
+  "vfcvtl.s.h\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vfcvtl_d_s"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(float_extend:V2DF
+	(vec_select:V2SF
+	  (match_operand:V4SF 1 "register_operand" "f")
+	  (parallel [(const_int 0) (const_int 1)]))))]
+  "ISA_HAS_LSX"
+  "vfcvtl.d.s\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V2DF")])
+
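+;; Branches on vector conditions.  vset{anyeqz,allnez}.<fmt> (per element)
+;; and vset{eqz,nez}.v (whole register) write an FCC register which the
+;; following bcnez tests.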
+(define_code_attr lsxbr
+  [(eq "bz")
+   (ne "bnz")])
+
+(define_code_attr lsxeq_v
+  [(eq "eqz")
+   (ne "nez")])
+
+(define_code_attr lsxne_v
+  [(eq "nez")
+   (ne "eqz")])
+
+(define_code_attr lsxeq
+  [(eq "anyeqz")
+   (ne "allnez")])
+
+(define_code_attr lsxne
+  [(eq "allnez")
+   (ne "anyeqz")])
+
+(define_insn "lsx_<lsxbr>_<lsxfmt_f>"
+ [(set (pc) (if_then_else
+	      (equality_op
+		(unspec:SI [(match_operand:LSX 1 "register_operand" "f")]
+			    UNSPEC_LSX_BRANCH)
+		  (match_operand:SI 2 "const_0_operand"))
+		  (label_ref (match_operand 0))
+		  (pc)))
+      (clobber (match_scratch:FCC 3 "=z"))]
+ "ISA_HAS_LSX"
+{
+  return loongarch_output_conditional_branch (insn, operands,
+					 "vset<lsxeq>.<lsxfmt>\t%Z3%w1\n\tbcnez\t%Z3%0",
+					 "vset<lsxne>.<lsxfmt>\t%Z3%w1\n\tbcnez\t%Z3%0");
+}
+ [(set_attr "type" "simd_branch")
+  (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_<lsxbr>_v_<lsxfmt_f>"
+ [(set (pc) (if_then_else
+	      (equality_op
+		(unspec:SI [(match_operand:LSX 1 "register_operand" "f")]
+			    UNSPEC_LSX_BRANCH_V)
+		  (match_operand:SI 2 "const_0_operand"))
+		  (label_ref (match_operand 0))
+		  (pc)))
+      (clobber (match_scratch:FCC 3 "=z"))]
+ "ISA_HAS_LSX"
+{
+  return loongarch_output_conditional_branch (insn, operands,
+					 "vset<lsxeq_v>.v\t%Z3%w1\n\tbcnez\t%Z3%0",
+					 "vset<lsxne_v>.v\t%Z3%w1\n\tbcnez\t%Z3%0");
+}
+ [(set_attr "type" "simd_branch")
+  (set_attr "mode" "TI")])
+
+;; vec_concat: build a V2DI from two DI values with two vinsgr2vr.d inserts.
+(define_expand "vec_concatv2di"
+  [(set (match_operand:V2DI 0 "register_operand")
+	(vec_concat:V2DI
+	  (match_operand:DI 1 "register_operand")
+	  (match_operand:DI 2 "register_operand")))]
+  "ISA_HAS_LSX"
+{
+  emit_insn (gen_lsx_vinsgr2vr_d (operands[0], operands[1],
+				  operands[0], GEN_INT (0)));
+  emit_insn (gen_lsx_vinsgr2vr_d (operands[0], operands[2],
+				  operands[0], GEN_INT (1)));
+  DONE;
+})
+
+(define_insn "vandn<mode>3"
+  [(set (match_operand:LSX 0 "register_operand" "=f")
+	(and:LSX (not:LSX (match_operand:LSX 1 "register_operand" "f"))
+		 (match_operand:LSX 2 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vandn.v\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "vabs<mode>2"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(abs:ILSX (match_operand:ILSX 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vsigncov.<lsxfmt>\t%w0,%w1,%w1"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "vneg<mode>2"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(neg:ILSX (match_operand:ILSX 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vneg.<lsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vmuh_s_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VMUH_S))]
+  "ISA_HAS_LSX"
+  "vmuh.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vmuh_u_<lsxfmt_u>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VMUH_U))]
+  "ISA_HAS_LSX"
+  "vmuh.<lsxfmt_u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vextw_s_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V4SI 1 "register_operand" "f")]
+		     UNSPEC_LSX_VEXTW_S))]
+  "ISA_HAS_LSX"
+  "vextw_s.d\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vextw_u_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V4SI 1 "register_operand" "f")]
+		     UNSPEC_LSX_VEXTW_U))]
+  "ISA_HAS_LSX"
+  "vextw_u.d\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vsllwil_s_<dlsxfmt>_<lsxfmt>"
+  [(set (match_operand:<VDMODE> 0 "register_operand" "=f")
+	(unspec:<VDMODE> [(match_operand:ILSX_WHB 1 "register_operand" "f")
+			  (match_operand 2 "const_<bitimm>_operand" "")]
+			 UNSPEC_LSX_VSLLWIL_S))]
+  "ISA_HAS_LSX"
+  "vsllwil.<dlsxfmt>.<lsxfmt>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsllwil_u_<dlsxfmt_u>_<lsxfmt_u>"
+  [(set (match_operand:<VDMODE> 0 "register_operand" "=f")
+	(unspec:<VDMODE> [(match_operand:ILSX_WHB 1 "register_operand" "f")
+			  (match_operand 2 "const_<bitimm>_operand" "")]
+			 UNSPEC_LSX_VSLLWIL_U))]
+  "ISA_HAS_LSX"
+  "vsllwil.<dlsxfmt_u>.<lsxfmt_u>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsran_<hlsxfmt>_<lsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand:ILSX_DWH 2 "register_operand" "f")]
+			 UNSPEC_LSX_VSRAN))]
+  "ISA_HAS_LSX"
+  "vsran.<hlsxfmt>.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssran_s_<hlsxfmt>_<lsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand:ILSX_DWH 2 "register_operand" "f")]
+			 UNSPEC_LSX_VSSRAN_S))]
+  "ISA_HAS_LSX"
+  "vssran.<hlsxfmt>.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssran_u_<hlsxfmt_u>_<lsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand:ILSX_DWH 2 "register_operand" "f")]
+			 UNSPEC_LSX_VSSRAN_U))]
+  "ISA_HAS_LSX"
+  "vssran.<hlsxfmt_u>.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsrain_<hlsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand 2 "const_<bitimm>_operand" "")]
+			 UNSPEC_LSX_VSRAIN))]
+  "ISA_HAS_LSX"
+  "vsrain.<hlsxfmt>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+;; FIXME: bitimm
+(define_insn "lsx_vsrains_s_<hlsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand 2 "const_<bitimm>_operand" "")]
+			 UNSPEC_LSX_VSRAINS_S))]
+  "ISA_HAS_LSX"
+  "vsrains_s.<hlsxfmt>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+;; FIXME: bitimm
+(define_insn "lsx_vsrains_u_<hlsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand 2 "const_<bitimm>_operand" "")]
+			 UNSPEC_LSX_VSRAINS_U))]
+  "ISA_HAS_LSX"
+  "vsrains_u.<hlsxfmt>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsrarn_<hlsxfmt>_<lsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand:ILSX_DWH 2 "register_operand" "f")]
+			 UNSPEC_LSX_VSRARN))]
+  "ISA_HAS_LSX"
+  "vsrarn.<hlsxfmt>.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrarn_s_<hlsxfmt>_<lsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand:ILSX_DWH 2 "register_operand" "f")]
+			 UNSPEC_LSX_VSSRARN_S))]
+  "ISA_HAS_LSX"
+  "vssrarn.<hlsxfmt>.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrarn_u_<hlsxfmt_u>_<lsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand:ILSX_DWH 2 "register_operand" "f")]
+			 UNSPEC_LSX_VSSRARN_U))]
+  "ISA_HAS_LSX"
+  "vssrarn.<hlsxfmt_u>.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsrln_<hlsxfmt>_<lsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand:ILSX_DWH 2 "register_operand" "f")]
+			 UNSPEC_LSX_VSRLN))]
+  "ISA_HAS_LSX"
+  "vsrln.<hlsxfmt>.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrln_u_<hlsxfmt_u>_<lsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand:ILSX_DWH 2 "register_operand" "f")]
+			 UNSPEC_LSX_VSSRLN_U))]
+  "ISA_HAS_LSX"
+  "vssrln.<hlsxfmt_u>.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsrlrn_<hlsxfmt>_<lsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand:ILSX_DWH 2 "register_operand" "f")]
+			 UNSPEC_LSX_VSRLRN))]
+  "ISA_HAS_LSX"
+  "vsrlrn.<hlsxfmt>.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrlrn_u_<hlsxfmt_u>_<lsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand:ILSX_DWH 2 "register_operand" "f")]
+			 UNSPEC_LSX_VSSRLRN_U))]
+  "ISA_HAS_LSX"
+  "vssrlrn.<hlsxfmt_u>.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vfrstpi_<lsxfmt>"
+  [(set (match_operand:ILSX_HB 0 "register_operand" "=f")
+	(unspec:ILSX_HB [(match_operand:ILSX_HB 1 "register_operand" "0")
+			 (match_operand:ILSX_HB 2 "register_operand" "f")
+			 (match_operand 3 "const_uimm5_operand" "")]
+			UNSPEC_LSX_VFRSTPI))]
+  "ISA_HAS_LSX"
+  "vfrstpi.<lsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vfrstp_<lsxfmt>"
+  [(set (match_operand:ILSX_HB 0 "register_operand" "=f")
+	(unspec:ILSX_HB [(match_operand:ILSX_HB 1 "register_operand" "0")
+			 (match_operand:ILSX_HB 2 "register_operand" "f")
+			 (match_operand:ILSX_HB 3 "register_operand" "f")]
+			UNSPEC_LSX_VFRSTP))]
+  "ISA_HAS_LSX"
+  "vfrstp.<lsxfmt>\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vshuf4i_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "0")
+		      (match_operand:V2DI 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand")]
+		     UNSPEC_LSX_VSHUF4I))]
+  "ISA_HAS_LSX"
+  "vshuf4i.d\t%w0,%w2,%3"
+  [(set_attr "type" "simd_sld")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vbsrl_<lsxfmt_f>"
+  [(set (match_operand:LSX 0 "register_operand" "=f")
+	(unspec:LSX [(match_operand:LSX 1 "register_operand" "f")
+		     (match_operand 2 "const_uimm5_operand" "")]
+		    UNSPEC_LSX_VBSRL_V))]
+  "ISA_HAS_LSX"
+  "vbsrl.v\t%w0,%w1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vbsll_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand 2 "const_uimm5_operand" "")]
+		     UNSPEC_LSX_VBSLL_V))]
+  "ISA_HAS_LSX"
+  "vbsll.v\t%w0,%w1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vextrins_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VEXTRINS))]
+  "ISA_HAS_LSX"
+  "vextrins.<lsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vmskltz_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")]
+		     UNSPEC_LSX_VMSKLTZ))]
+  "ISA_HAS_LSX"
+  "vmskltz.<lsxfmt>\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsigncov_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VSIGNCOV))]
+  "ISA_HAS_LSX"
+  "vsigncov.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
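+;; copysign: clear the sign bit of operand 1, isolate the sign bit of
+;; operand 2 with a sign-bit mask, and OR the two results together.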
+(define_expand "copysign<mode>3"
+  [(set (match_dup 4)
+	(and:FLSX
+	  (not:FLSX (match_dup 3))
+	  (match_operand:FLSX 1 "register_operand")))
+   (set (match_dup 5)
+	(and:FLSX (match_dup 3)
+		  (match_operand:FLSX 2 "register_operand")))
+   (set (match_operand:FLSX 0 "register_operand")
+	(ior:FLSX (match_dup 4) (match_dup 5)))]
+  "ISA_HAS_LSX"
+{
+  operands[3] = loongarch_build_signbit_mask (<MODE>mode, 1, 0);
+
+  operands[4] = gen_reg_rtx (<MODE>mode);
+  operands[5] = gen_reg_rtx (<MODE>mode);
+})
+
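+;; Floating-point absolute value is just clearing the sign bit.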
+(define_insn "absv2df2"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(abs:V2DF (match_operand:V2DF 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vbitclri.d\t%w0,%w1,63"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "absv4sf2"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(abs:V4SF (match_operand:V4SF 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vbitclri.w\t%w0,%w1,31"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "vfmadd<mode>4"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(fma:FLSX (match_operand:FLSX 1 "register_operand" "f")
+		  (match_operand:FLSX 2 "register_operand" "f")
+		  (match_operand:FLSX 3 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vfmadd.<flsxfmt>\t%w0,%w1,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "fms<mode>4"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(fma:FLSX (match_operand:FLSX 1 "register_operand" "f")
+		  (match_operand:FLSX 2 "register_operand" "f")
+		  (neg:FLSX (match_operand:FLSX 3 "register_operand" "f"))))]
+  "ISA_HAS_LSX"
+  "vfmsub.<flsxfmt>\t%w0,%w1,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "vfnmsub<mode>4_nmsub4"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(neg:FLSX
+	  (fma:FLSX
+	    (match_operand:FLSX 1 "register_operand" "f")
+	    (match_operand:FLSX 2 "register_operand" "f")
+	    (neg:FLSX (match_operand:FLSX 3 "register_operand" "f")))))]
+  "ISA_HAS_LSX"
+  "vfnmsub.<flsxfmt>\t%w0,%w1,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "vfnmadd<mode>4_nmadd4"
+  [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(neg:FLSX
+	  (fma:FLSX
+	    (match_operand:FLSX 1 "register_operand" "f")
+	    (match_operand:FLSX 2 "register_operand" "f")
+	    (match_operand:FLSX 3 "register_operand" "f"))))]
+  "ISA_HAS_LSX"
+  "vfnmadd.<flsxfmt>\t%w0,%w1,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "<MODE>")])
+
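+;; FP -> int conversions with explicit rounding: rne = to nearest even,
+;; rp = toward +inf, rm = toward -inf, rz = toward zero.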
+(define_insn "lsx_vftintrne_w_s"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(unspec:V4SI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRNE))]
+  "ISA_HAS_LSX"
+  "vftintrne.w.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vftintrne_l_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRNE))]
+  "ISA_HAS_LSX"
+  "vftintrne.l.d\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vftintrp_w_s"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(unspec:V4SI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRP))]
+  "ISA_HAS_LSX"
+  "vftintrp.w.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vftintrp_l_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRP))]
+  "ISA_HAS_LSX"
+  "vftintrp.l.d\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vftintrm_w_s"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(unspec:V4SI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRM))]
+  "ISA_HAS_LSX"
+  "vftintrm.w.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vftintrm_l_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRM))]
+  "ISA_HAS_LSX"
+  "vftintrm.l.d\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vftint_w_d"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(unspec:V4SI [(match_operand:V2DF 1 "register_operand" "f")
+		      (match_operand:V2DF 2 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINT_W_D))]
+  "ISA_HAS_LSX"
+  "vftint.w.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vffint_s_l"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(unspec:V4SF [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VFFINT_S_L))]
+  "ISA_HAS_LSX"
+  "vffint.s.l\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vftintrz_w_d"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(unspec:V4SI [(match_operand:V2DF 1 "register_operand" "f")
+		      (match_operand:V2DF 2 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRZ_W_D))]
+  "ISA_HAS_LSX"
+  "vftintrz.w.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vftintrp_w_d"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(unspec:V4SI [(match_operand:V2DF 1 "register_operand" "f")
+		      (match_operand:V2DF 2 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRP_W_D))]
+  "ISA_HAS_LSX"
+  "vftintrp.w.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vftintrm_w_d"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(unspec:V4SI [(match_operand:V2DF 1 "register_operand" "f")
+		      (match_operand:V2DF 2 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRM_W_D))]
+  "ISA_HAS_LSX"
+  "vftintrm.w.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vftintrne_w_d"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(unspec:V4SI [(match_operand:V2DF 1 "register_operand" "f")
+		      (match_operand:V2DF 2 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRNE_W_D))]
+  "ISA_HAS_LSX"
+  "vftintrne.w.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vftinth_l_s"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTH_L_H))]
+  "ISA_HAS_LSX"
+  "vftinth.l.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vftintl_l_s"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTL_L_S))]
+  "ISA_HAS_LSX"
+  "vftintl.l.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vffinth_d_w"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(unspec:V2DF [(match_operand:V4SI 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFFINTH_D_W))]
+  "ISA_HAS_LSX"
+  "vffinth.d.w\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vffintl_d_w"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(unspec:V2DF [(match_operand:V4SI 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFFINTL_D_W))]
+  "ISA_HAS_LSX"
+  "vffintl.d.w\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vftintrzh_l_s"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRZH_L_S))]
+  "ISA_HAS_LSX"
+  "vftintrzh.l.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vftintrzl_l_s"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRZL_L_S))]
+  "ISA_HAS_LSX"
+  "vftintrzl.l.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vftintrph_l_s"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRPH_L_S))]
+  "ISA_HAS_LSX"
+  "vftintrph.l.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vftintrpl_l_s"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRPL_L_S))]
+  "ISA_HAS_LSX"
+  "vftintrpl.l.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vftintrmh_l_s"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRMH_L_S))]
+  "ISA_HAS_LSX"
+  "vftintrmh.l.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vftintrml_l_s"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRML_L_S))]
+  "ISA_HAS_LSX"
+  "vftintrml.l.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vftintrneh_l_s"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRNEH_L_S))]
+  "ISA_HAS_LSX"
+  "vftintrneh.l.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vftintrnel_l_s"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFTINTRNEL_L_S))]
+  "ISA_HAS_LSX"
+  "vftintrnel.l.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vfrintrne_s"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(unspec:V4SF [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFRINTRNE_S))]
+  "ISA_HAS_LSX"
+  "vfrintrne.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vfrintrne_d"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(unspec:V2DF [(match_operand:V2DF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFRINTRNE_D))]
+  "ISA_HAS_LSX"
+  "vfrintrne.d\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vfrintrz_s"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(unspec:V4SF [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFRINTRZ_S))]
+  "ISA_HAS_LSX"
+  "vfrintrz.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vfrintrz_d"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(unspec:V2DF [(match_operand:V2DF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFRINTRZ_D))]
+  "ISA_HAS_LSX"
+  "vfrintrz.d\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vfrintrp_s"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(unspec:V4SF [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFRINTRP_S))]
+  "ISA_HAS_LSX"
+  "vfrintrp.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vfrintrp_d"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(unspec:V2DF [(match_operand:V2DF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFRINTRP_D))]
+  "ISA_HAS_LSX"
+  "vfrintrp.d\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V2DF")])
+
+(define_insn "lsx_vfrintrm_s"
+  [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(unspec:V4SF [(match_operand:V4SF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFRINTRM_S))]
+  "ISA_HAS_LSX"
+  "vfrintrm.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lsx_vfrintrm_d"
+  [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(unspec:V2DF [(match_operand:V2DF 1 "register_operand" "f")]
+		     UNSPEC_LSX_VFRINTRM_D))]
+  "ISA_HAS_LSX"
+  "vfrintrm.d\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V2DF")])
+
+;; Vector versions of the floating-point frint patterns.
+;; Expands to btrunc, ceil, floor, rint.
+(define_insn "<FRINT_S:frint_pattern_s>v4sf2"
+ [(set (match_operand:V4SF 0 "register_operand" "=f")
+	(unspec:V4SF [(match_operand:V4SF 1 "register_operand" "f")]
+			 FRINT_S))]
+  "ISA_HAS_LSX"
+  "vfrint<FRINT_S:frint_suffix>.s\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "<FRINT_D:frint_pattern_d>v2df2"
+ [(set (match_operand:V2DF 0 "register_operand" "=f")
+	(unspec:V2DF [(match_operand:V2DF 1 "register_operand" "f")]
+			 FRINT_D))]
+  "ISA_HAS_LSX"
+  "vfrint<FRINT_D:frint_suffix>.d\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V2DF")])
+
+;; Expands to round.
+(define_insn "round<mode>2"
+ [(set (match_operand:FLSX 0 "register_operand" "=f")
+	(unspec:FLSX [(match_operand:FLSX 1 "register_operand" "f")]
+			 UNSPEC_LSX_VFRINT))]
+  "ISA_HAS_LSX"
+  "vfrint.<flsxfrint>\t%w0,%w1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+;; Offset load and broadcast
+(define_expand "lsx_vldrepl_<lsxfmt_f>"
+  [(match_operand:LSX 0 "register_operand")
+   (match_operand 1 "pmode_register_operand")
+   (match_operand 2 "aq12<lsxfmt>_operand")]
+  "ISA_HAS_LSX"
+{
+  emit_insn (gen_lsx_vldrepl_<lsxfmt_f>_insn
+	     (operands[0], operands[1], operands[2]));
+  DONE;
+})
+
+(define_insn "lsx_vldrepl_<lsxfmt_f>_insn"
+  [(set (match_operand:LSX 0 "register_operand" "=f")
+	(vec_duplicate:LSX
+	  (mem:<UNITMODE> (plus:DI (match_operand:DI 1 "register_operand" "r")
+				   (match_operand 2 "aq12<lsxfmt>_operand")))))]
+  "ISA_HAS_LSX"
+{
+    return "vldrepl.<lsxfmt>\t%w0,%1,%2";
+}
+  [(set_attr "type" "simd_load")
+   (set_attr "mode" "<MODE>")
+   (set_attr "length" "4")])
+
+(define_insn "lsx_vldrepl_<lsxfmt_f>_insn_0"
+  [(set (match_operand:LSX 0 "register_operand" "=f")
+    (vec_duplicate:LSX
+      (mem:<UNITMODE> (match_operand:DI 1 "register_operand" "r"))))]
+  "ISA_HAS_LSX"
+{
+    return "vldrepl.<lsxfmt>\t%w0,%1,0";
+}
+  [(set_attr "type" "simd_load")
+   (set_attr "mode" "<MODE>")
+   (set_attr "length" "4")])
+
+;; Offset store of a single element, selected by an immediate index
+(define_expand "lsx_vstelm_<lsxfmt_f>"
+  [(match_operand:LSX 0 "register_operand")
+   (match_operand 3 "const_<indeximm>_operand")
+   (match_operand 2 "aq8<lsxfmt>_operand")
+   (match_operand 1 "pmode_register_operand")]
+  "ISA_HAS_LSX"
+{
+  emit_insn (gen_lsx_vstelm_<lsxfmt_f>_insn
+	     (operands[1], operands[2], operands[0], operands[3]));
+  DONE;
+})
+
+(define_insn "lsx_vstelm_<lsxfmt_f>_insn"
+  [(set (mem:<UNITMODE> (plus:DI (match_operand:DI 0 "register_operand" "r")
+				 (match_operand 1 "aq8<lsxfmt>_operand")))
+	(vec_select:<UNITMODE>
+	  (match_operand:LSX 2 "register_operand" "f")
+	  (parallel [(match_operand 3 "const_<indeximm>_operand" "")])))]
+  "ISA_HAS_LSX"
+{
+  return "vstelm.<lsxfmt>\t%w2,%0,%1,%3";
+}
+  [(set_attr "type" "simd_store")
+   (set_attr "mode" "<MODE>")
+   (set_attr "length" "4")])
+
+;; As above, but with a zero offset
+(define_insn "lsx_vstelm_<lsxfmt_f>_insn_0"
+  [(set (mem:<UNITMODE> (match_operand:DI 0 "register_operand" "r"))
+    (vec_select:<UNITMODE>
+      (match_operand:LSX 1 "register_operand" "f")
+      (parallel [(match_operand:SI 2 "const_<indeximm>_operand")])))]
+  "ISA_HAS_LSX"
+{
+    return "vstelm.<lsxfmt>\t%w1,%0,0,%2";
+}
+  [(set_attr "type" "simd_store")
+   (set_attr "mode" "<MODE>")
+   (set_attr "length" "4")])
+
+(define_expand "lsx_vld"
+  [(match_operand:V16QI 0 "register_operand")
+   (match_operand 1 "pmode_register_operand")
+   (match_operand 2 "aq12b_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx addr = plus_constant (GET_MODE (operands[1]), operands[1],
+			    INTVAL (operands[2]));
+  loongarch_emit_move (operands[0], gen_rtx_MEM (V16QImode, addr));
+  DONE;
+})
+
+(define_expand "lsx_vst"
+  [(match_operand:V16QI 0 "register_operand")
+   (match_operand 1 "pmode_register_operand")
+   (match_operand 2 "aq12b_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx addr = plus_constant (GET_MODE (operands[1]), operands[1],
+			    INTVAL (operands[2]));
+  loongarch_emit_move (gen_rtx_MEM (V16QImode, addr), operands[0]);
+  DONE;
+})
+
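+;; Saturating logical shift-right-and-narrow (vssrln) and its rounding variant
+;; (vssrlrn); the result elements are half the width of the source elements.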
+(define_insn "lsx_vssrln_<hlsxfmt>_<lsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand:ILSX_DWH 2 "register_operand" "f")]
+			 UNSPEC_LSX_VSSRLN))]
+  "ISA_HAS_LSX"
+  "vssrln.<hlsxfmt>.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrlrn_<hlsxfmt>_<lsxfmt>"
+  [(set (match_operand:<VHMODE> 0 "register_operand" "=f")
+	(unspec:<VHMODE> [(match_operand:ILSX_DWH 1 "register_operand" "f")
+			  (match_operand:ILSX_DWH 2 "register_operand" "f")]
+			 UNSPEC_LSX_VSSRLRN))]
+  "ISA_HAS_LSX"
+  "vssrlrn.<hlsxfmt>.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
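+;; Bitwise OR-NOT: operand 0 = operand 1 | ~operand 2.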
+(define_insn "vorn<mode>3"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(ior:ILSX (not:ILSX (match_operand:ILSX 2 "register_operand" "f"))
+		  (match_operand:ILSX 1 "register_operand" "f")))]
+  "ISA_HAS_LSX"
+  "vorn.v\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "<MODE>")])
+
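+;; Load a vector immediate.  When the top bit of the 13-bit field is set,
+;; bits 9 ~ 12 select a special mode; only modes 0000 ~ 1100 are supported.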
+(define_insn "lsx_vldi"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand 1 "const_imm13_operand")]
+		    UNSPEC_LSX_VLDI))]
+  "ISA_HAS_LSX"
+{
+  HOST_WIDE_INT val = INTVAL (operands[1]);
+  if (val < 0)
+    {
+      HOST_WIDE_INT mode_val = (val & 0xf00) >> 8;
+      if (mode_val < 13)
+        return "vldi\t%w0,%1";
+      sorry ("imm13 only supports 0000 ~ 1100 in bits 9 ~ 12 when bit 13 is 1");
+      return "#";
+    }
+  return "vldi\t%w0,%1";
+}
+  [(set_attr "type" "simd_load")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vshuf_b"
+  [(set (match_operand:V16QI 0 "register_operand" "=f")
+	(unspec:V16QI [(match_operand:V16QI 1 "register_operand" "f")
+		       (match_operand:V16QI 2 "register_operand" "f")
+		       (match_operand:V16QI 3 "register_operand" "f")]
+		      UNSPEC_LSX_VSHUF_B))]
+  "ISA_HAS_LSX"
+  "vshuf.b\t%w0,%w1,%w2,%w3"
+  [(set_attr "type" "simd_shf")
+   (set_attr "mode" "V16QI")])
+
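+;; Indexed vector load/store: the address is the base register plus a
+;; register (or zero) index.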
+(define_insn "lsx_vldx"
+  [(set (match_operand:V16QI 0 "register_operand" "=f")
+	(unspec:V16QI [(match_operand:DI 1 "register_operand" "r")
+		       (match_operand:DI 2 "reg_or_0_operand" "rJ")]
+		      UNSPEC_LSX_VLDX))]
+  "ISA_HAS_LSX"
+{
+  return "vldx\t%w0,%1,%z2";
+}
+  [(set_attr "type" "simd_load")
+   (set_attr "mode" "V16QI")])
+
+(define_insn "lsx_vstx"
+  [(set (mem:V16QI (plus:DI (match_operand:DI 1 "register_operand" "r")
+			    (match_operand:DI 2 "reg_or_0_operand" "rJ")))
+	(unspec:V16QI [(match_operand:V16QI 0 "register_operand" "f")]
+		      UNSPEC_LSX_VSTX))]
+  "ISA_HAS_LSX"
+{
+  return "vstx\t%w0,%1,%z2";
+}
+  [(set_attr "type" "simd_store")
+   (set_attr "mode" "DI")])
+
+(define_insn "lsx_vextl_qu_du"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")]
+		     UNSPEC_LSX_VEXTL_QU_DU))]
+  "ISA_HAS_LSX"
+  "vextl.qu.du\t%w0,%w1"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "V2DI")])
+
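+;; Set a condition-flag (FCC) register according to whether the whole vector
+;; is zero.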
+(define_insn "lsx_vseteqz_v"
+  [(set (match_operand:FCC 0 "register_operand" "=z")
+	(eq:FCC
+	  (unspec:SI [(match_operand:V16QI 1 "register_operand" "f")]
+		     UNSPEC_LSX_VSETEQZ_V)
+	  (match_operand:SI 2 "const_0_operand")))]
+  "ISA_HAS_LSX"
+{
+  return "vseteqz.v\t%0,%1";
+}
+  [(set_attr "type" "simd_fcmp")
+   (set_attr "mode" "FCC")])
+
+;; Vector reduction operation
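+;; The scalar result is computed by combining adjacent elements (vhaddw for
+;; the integer plus reductions, loongarch_expand_vector_reduc for the rest)
+;; and then extracting element 0.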
+(define_expand "reduc_plus_scal_v2di"
+  [(match_operand:DI 0 "register_operand")
+   (match_operand:V2DI 1 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx tmp = gen_reg_rtx (V2DImode);
+  emit_insn (gen_lsx_vhaddw_q_d (tmp, operands[1], operands[1]));
+  emit_insn (gen_vec_extractv2didi (operands[0], tmp, const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_plus_scal_v4si"
+  [(match_operand:SI 0 "register_operand")
+   (match_operand:V4SI 1 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx tmp = gen_reg_rtx (V2DImode);
+  rtx tmp1 = gen_reg_rtx (V2DImode);
+  emit_insn (gen_lsx_vhaddw_d_w (tmp, operands[1], operands[1]));
+  emit_insn (gen_lsx_vhaddw_q_d (tmp1, tmp, tmp));
+  emit_insn (gen_vec_extractv4sisi (operands[0], gen_lowpart (V4SImode, tmp1),
+				    const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_plus_scal_<mode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:FLSX 1 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx tmp = gen_reg_rtx (<MODE>mode);
+  loongarch_expand_vector_reduc (gen_add<mode>3, tmp, operands[1]);
+  emit_insn (gen_vec_extract<mode><unitmode> (operands[0], tmp,
+					      const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_<optab>_scal_<mode>"
+  [(any_bitwise:<UNITMODE>
+      (match_operand:<UNITMODE> 0 "register_operand")
+      (match_operand:ILSX 1 "register_operand"))]
+  "ISA_HAS_LSX"
+{
+  rtx tmp = gen_reg_rtx (<MODE>mode);
+  loongarch_expand_vector_reduc (gen_<optab><mode>3, tmp, operands[1]);
+  emit_insn (gen_vec_extract<mode><unitmode> (operands[0], tmp,
+					      const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_smax_scal_<mode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:LSX 1 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx tmp = gen_reg_rtx (<MODE>mode);
+  loongarch_expand_vector_reduc (gen_smax<mode>3, tmp, operands[1]);
+  emit_insn (gen_vec_extract<mode><unitmode> (operands[0], tmp,
+					      const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_smin_scal_<mode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:LSX 1 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx tmp = gen_reg_rtx (<MODE>mode);
+  loongarch_expand_vector_reduc (gen_smin<mode>3, tmp, operands[1]);
+  emit_insn (gen_vec_extract<mode><unitmode> (operands[0], tmp,
+					      const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_umax_scal_<mode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:ILSX 1 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx tmp = gen_reg_rtx (<MODE>mode);
+  loongarch_expand_vector_reduc (gen_umax<mode>3, tmp, operands[1]);
+  emit_insn (gen_vec_extract<mode><unitmode> (operands[0], tmp,
+					      const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_umin_scal_<mode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:ILSX 1 "register_operand")]
+  "ISA_HAS_LSX"
+{
+  rtx tmp = gen_reg_rtx (<MODE>mode);
+  loongarch_expand_vector_reduc (gen_umin<mode>3, tmp, operands[1]);
+  emit_insn (gen_vec_extract<mode><unitmode> (operands[0], tmp,
+					      const0_rtx));
+  DONE;
+})
+
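+;; Widening even/odd arithmetic: the "wev" patterns operate on the
+;; even-numbered elements of the sources and the "wod" patterns on the
+;; odd-numbered ones, widening each element to twice its original size.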
+(define_insn "lsx_v<optab>wev_d_w<u>"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(addsubmul:V2DI
+	  (any_extend:V2DI
+	    (vec_select:V2SI
+	      (match_operand:V4SI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)])))
+	  (any_extend:V2DI
+	    (vec_select:V2SI
+	      (match_operand:V4SI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)])))))]
+  "ISA_HAS_LSX"
+  "v<optab>wev.d.w<u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_v<optab>wev_w_h<u>"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(addsubmul:V4SI
+	  (any_extend:V4SI
+	    (vec_select:V4HI
+	      (match_operand:V8HI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)])))
+	  (any_extend:V4SI
+	    (vec_select:V4HI
+	      (match_operand:V8HI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)])))))]
+  "ISA_HAS_LSX"
+  "v<optab>wev.w.h<u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_v<optab>wev_h_b<u>"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(addsubmul:V8HI
+	  (any_extend:V8HI
+	    (vec_select:V8QI
+	      (match_operand:V16QI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)])))
+	  (any_extend:V8HI
+	    (vec_select:V8QI
+	      (match_operand:V16QI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)])))))]
+  "ISA_HAS_LSX"
+  "v<optab>wev.h.b<u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_v<optab>wod_d_w<u>"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(addsubmul:V2DI
+	  (any_extend:V2DI
+	    (vec_select:V2SI
+	      (match_operand:V4SI 1 "register_operand" "%f")
+	      (parallel [(const_int 1) (const_int 3)])))
+	  (any_extend:V2DI
+	    (vec_select:V2SI
+	      (match_operand:V4SI 2 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)])))))]
+  "ISA_HAS_LSX"
+  "v<optab>wod.d.w<u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_v<optab>wod_w_h<u>"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(addsubmul:V4SI
+	  (any_extend:V4SI
+	    (vec_select:V4HI
+	      (match_operand:V8HI 1 "register_operand" "%f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)])))
+	  (any_extend:V4SI
+	    (vec_select:V4HI
+	      (match_operand:V8HI 2 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)])))))]
+  "ISA_HAS_LSX"
+  "v<optab>wod.w.h<u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_v<optab>wod_h_b<u>"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(addsubmul:V8HI
+	  (any_extend:V8HI
+	    (vec_select:V8QI
+	      (match_operand:V16QI 1 "register_operand" "%f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)])))
+	  (any_extend:V8HI
+	    (vec_select:V8QI
+	      (match_operand:V16QI 2 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)])))))]
+  "ISA_HAS_LSX"
+  "v<optab>wod.h.b<u>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_v<optab>wev_d_wu_w"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(addmul:V2DI
+	  (zero_extend:V2DI
+	    (vec_select:V2SI
+	      (match_operand:V4SI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)])))
+	  (sign_extend:V2DI
+	    (vec_select:V2SI
+	      (match_operand:V4SI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)])))))]
+  "ISA_HAS_LSX"
+  "v<optab>wev.d.wu.w\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_v<optab>wev_w_hu_h"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(addmul:V4SI
+	  (zero_extend:V4SI
+	    (vec_select:V4HI
+	      (match_operand:V8HI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)])))
+	  (sign_extend:V4SI
+	    (vec_select:V4HI
+	      (match_operand:V8HI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)])))))]
+  "ISA_HAS_LSX"
+  "v<optab>wev.w.hu.h\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_v<optab>wev_h_bu_b"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(addmul:V8HI
+	  (zero_extend:V8HI
+	    (vec_select:V8QI
+	      (match_operand:V16QI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)])))
+	  (sign_extend:V8HI
+	    (vec_select:V8QI
+	      (match_operand:V16QI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)])))))]
+  "ISA_HAS_LSX"
+  "v<optab>wev.h.bu.b\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_v<optab>wod_d_wu_w"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(addmul:V2DI
+	  (zero_extend:V2DI
+	    (vec_select:V2SI
+	      (match_operand:V4SI 1 "register_operand" "%f")
+	      (parallel [(const_int 1) (const_int 3)])))
+	  (sign_extend:V2DI
+	    (vec_select:V2SI
+	      (match_operand:V4SI 2 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)])))))]
+  "ISA_HAS_LSX"
+  "v<optab>wod.d.wu.w\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_v<optab>wod_w_hu_h"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(addmul:V4SI
+	  (zero_extend:V4SI
+	    (vec_select:V4HI
+	      (match_operand:V8HI 1 "register_operand" "%f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)])))
+	  (sign_extend:V4SI
+	    (vec_select:V4HI
+	      (match_operand:V8HI 2 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)])))))]
+  "ISA_HAS_LSX"
+  "v<optab>wod.w.hu.h\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_v<optab>wod_h_bu_b"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(addmul:V8HI
+	  (zero_extend:V8HI
+	    (vec_select:V8QI
+	      (match_operand:V16QI 1 "register_operand" "%f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)])))
+	  (sign_extend:V8HI
+	    (vec_select:V8QI
+	      (match_operand:V16QI 2 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)])))))]
+  "ISA_HAS_LSX"
+  "v<optab>wod.h.bu.b\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V8HI")])
+
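+;; Quad-word (128-bit) widening forms of the even/odd add, subtract and
+;; multiply patterns, modelled as unspecs.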
+(define_insn "lsx_vaddwev_q_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VADDWEV))]
+  "ISA_HAS_LSX"
+  "vaddwev.q.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vaddwev_q_du"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VADDWEV2))]
+  "ISA_HAS_LSX"
+  "vaddwev.q.du\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vaddwod_q_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VADDWOD))]
+  "ISA_HAS_LSX"
+  "vaddwod.q.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vaddwod_q_du"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VADDWOD2))]
+  "ISA_HAS_LSX"
+  "vaddwod.q.du\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vsubwev_q_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VSUBWEV))]
+  "ISA_HAS_LSX"
+  "vsubwev.q.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vsubwev_q_du"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VSUBWEV2))]
+  "ISA_HAS_LSX"
+  "vsubwev.q.du\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vsubwod_q_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VSUBWOD))]
+  "ISA_HAS_LSX"
+  "vsubwod.q.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vsubwod_q_du"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VSUBWOD2))]
+  "ISA_HAS_LSX"
+  "vsubwod.q.du\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vaddwev_q_du_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VADDWEV3))]
+  "ISA_HAS_LSX"
+  "vaddwev.q.du.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vaddwod_q_du_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VADDWOD3))]
+  "ISA_HAS_LSX"
+  "vaddwod.q.du.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmulwev_q_du_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VMULWEV3))]
+  "ISA_HAS_LSX"
+  "vmulwev.q.du.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmulwod_q_du_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VMULWOD3))]
+  "ISA_HAS_LSX"
+  "vmulwod.q.du.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmulwev_q_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VMULWEV))]
+  "ISA_HAS_LSX"
+  "vmulwev.q.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmulwev_q_du"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VMULWEV2))]
+  "ISA_HAS_LSX"
+  "vmulwev.q.du\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmulwod_q_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VMULWOD))]
+  "ISA_HAS_LSX"
+  "vmulwod.q.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmulwod_q_du"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VMULWOD2))]
+  "ISA_HAS_LSX"
+  "vmulwod.q.du\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vhaddw_q_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VHADDW_Q_D))]
+  "ISA_HAS_LSX"
+  "vhaddw.q.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vhaddw_qu_du"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VHADDW_QU_DU))]
+  "ISA_HAS_LSX"
+  "vhaddw.qu.du\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vhsubw_q_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VHSUBW_Q_D))]
+  "ISA_HAS_LSX"
+  "vhsubw.q.d\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vhsubw_qu_du"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VHSUBW_QU_DU))]
+  "ISA_HAS_LSX"
+  "vhsubw.qu.du\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
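+;; Widening multiply-accumulate on the even/odd elements; operand 1 is the
+;; accumulator and is tied to the destination register.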
+(define_insn "lsx_vmaddwev_d_w<u>"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(plus:V2DI
+	  (match_operand:V2DI 1 "register_operand" "0")
+	  (mult:V2DI
+	    (any_extend:V2DI
+	      (vec_select:V2SI
+		(match_operand:V4SI 2 "register_operand" "%f")
+		(parallel [(const_int 0) (const_int 2)])))
+	    (any_extend:V2DI
+	      (vec_select:V2SI
+		(match_operand:V4SI 3 "register_operand" "f")
+		(parallel [(const_int 0) (const_int 2)]))))))]
+  "ISA_HAS_LSX"
+  "vmaddwev.d.w<u>\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmaddwev_w_h<u>"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(plus:V4SI
+	  (match_operand:V4SI 1 "register_operand" "0")
+	  (mult:V4SI
+	    (any_extend:V4SI
+	      (vec_select:V4HI
+		(match_operand:V8HI 2 "register_operand" "%f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)])))
+	    (any_extend:V4SI
+	      (vec_select:V4HI
+		(match_operand:V8HI 3 "register_operand" "f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)]))))))]
+  "ISA_HAS_LSX"
+  "vmaddwev.w.h<u>\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vmaddwev_h_b<u>"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(plus:V8HI
+	  (match_operand:V8HI 1 "register_operand" "0")
+	  (mult:V8HI
+	    (any_extend:V8HI
+	      (vec_select:V8QI
+		(match_operand:V16QI 2 "register_operand" "%f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)
+			   (const_int 8) (const_int 10)
+			   (const_int 12) (const_int 14)])))
+	    (any_extend:V8HI
+	      (vec_select:V8QI
+		(match_operand:V16QI 3 "register_operand" "f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)
+			   (const_int 8) (const_int 10)
+			   (const_int 12) (const_int 14)]))))))]
+  "ISA_HAS_LSX"
+  "vmaddwev.h.b<u>\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vmaddwod_d_w<u>"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(plus:V2DI
+	  (match_operand:V2DI 1 "register_operand" "0")
+	  (mult:V2DI
+	    (any_extend:V2DI
+	      (vec_select:V2SI
+		(match_operand:V4SI 2 "register_operand" "%f")
+		(parallel [(const_int 1) (const_int 3)])))
+	    (any_extend:V2DI
+	      (vec_select:V2SI
+		(match_operand:V4SI 3 "register_operand" "f")
+		(parallel [(const_int 1) (const_int 3)]))))))]
+  "ISA_HAS_LSX"
+  "vmaddwod.d.w<u>\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmaddwod_w_h<u>"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(plus:V4SI
+	  (match_operand:V4SI 1 "register_operand" "0")
+	  (mult:V4SI
+	    (any_extend:V4SI
+	      (vec_select:V4HI
+		(match_operand:V8HI 2 "register_operand" "%f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)])))
+	    (any_extend:V4SI
+	      (vec_select:V4HI
+		(match_operand:V8HI 3 "register_operand" "f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)]))))))]
+  "ISA_HAS_LSX"
+  "vmaddwod.w.h<u>\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vmaddwod_h_b<u>"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(plus:V8HI
+	  (match_operand:V8HI 1 "register_operand" "0")
+	  (mult:V8HI
+	    (any_extend:V8HI
+	      (vec_select:V8QI
+		(match_operand:V16QI 2 "register_operand" "%f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)
+			   (const_int 9) (const_int 11)
+			   (const_int 13) (const_int 15)])))
+	    (any_extend:V8HI
+	      (vec_select:V8QI
+		(match_operand:V16QI 3 "register_operand" "f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)
+			   (const_int 9) (const_int 11)
+			   (const_int 13) (const_int 15)]))))))]
+  "ISA_HAS_LSX"
+  "vmaddwod.h.b<u>\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vmaddwev_d_wu_w"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(plus:V2DI
+	  (match_operand:V2DI 1 "register_operand" "0")
+	  (mult:V2DI
+	    (zero_extend:V2DI
+	      (vec_select:V2SI
+		(match_operand:V4SI 2 "register_operand" "%f")
+		(parallel [(const_int 0) (const_int 2)])))
+	    (sign_extend:V2DI
+	      (vec_select:V2SI
+		(match_operand:V4SI 3 "register_operand" "f")
+		(parallel [(const_int 0) (const_int 2)]))))))]
+  "ISA_HAS_LSX"
+  "vmaddwev.d.wu.w\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmaddwev_w_hu_h"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(plus:V4SI
+	  (match_operand:V4SI 1 "register_operand" "0")
+	  (mult:V4SI
+	    (zero_extend:V4SI
+	      (vec_select:V4HI
+		(match_operand:V8HI 2 "register_operand" "%f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)])))
+	    (sign_extend:V4SI
+	      (vec_select:V4HI
+		(match_operand:V8HI 3 "register_operand" "f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)]))))))]
+  "ISA_HAS_LSX"
+  "vmaddwev.w.hu.h\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vmaddwev_h_bu_b"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(plus:V8HI
+	  (match_operand:V8HI 1 "register_operand" "0")
+	  (mult:V8HI
+	    (zero_extend:V8HI
+	      (vec_select:V8QI
+		(match_operand:V16QI 2 "register_operand" "%f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)
+			   (const_int 8) (const_int 10)
+			   (const_int 12) (const_int 14)])))
+	    (sign_extend:V8HI
+	      (vec_select:V8QI
+		(match_operand:V16QI 3 "register_operand" "f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)
+			   (const_int 8) (const_int 10)
+			   (const_int 12) (const_int 14)]))))))]
+  "ISA_HAS_LSX"
+  "vmaddwev.h.bu.b\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vmaddwod_d_wu_w"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(plus:V2DI
+	  (match_operand:V2DI 1 "register_operand" "0")
+	  (mult:V2DI
+	    (zero_extend:V2DI
+	      (vec_select:V2SI
+		(match_operand:V4SI 2 "register_operand" "%f")
+		(parallel [(const_int 1) (const_int 3)])))
+	    (sign_extend:V2DI
+	      (vec_select:V2SI
+		(match_operand:V4SI 3 "register_operand" "f")
+		(parallel [(const_int 1) (const_int 3)]))))))]
+  "ISA_HAS_LSX"
+  "vmaddwod.d.wu.w\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmaddwod_w_hu_h"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(plus:V4SI
+	  (match_operand:V4SI 1 "register_operand" "0")
+	  (mult:V4SI
+	    (zero_extend:V4SI
+	      (vec_select:V4HI
+		(match_operand:V8HI 2 "register_operand" "%f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)])))
+	    (sign_extend:V4SI
+	      (vec_select:V4HI
+		(match_operand:V8HI 3 "register_operand" "f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)]))))))]
+  "ISA_HAS_LSX"
+  "vmaddwod.w.hu.h\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vmaddwod_h_bu_b"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(plus:V8HI
+	  (match_operand:V8HI 1 "register_operand" "0")
+	  (mult:V8HI
+	    (zero_extend:V8HI
+	      (vec_select:V8QI
+		(match_operand:V16QI 2 "register_operand" "%f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)
+			   (const_int 9) (const_int 11)
+			   (const_int 13) (const_int 15)])))
+	    (sign_extend:V8HI
+	      (vec_select:V8QI
+		(match_operand:V16QI 3 "register_operand" "f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)
+			   (const_int 9) (const_int 11)
+			   (const_int 13) (const_int 15)]))))))]
+  "ISA_HAS_LSX"
+  "vmaddwod.h.bu.b\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vmaddwev_q_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "0")
+		      (match_operand:V2DI 2 "register_operand" "f")
+		      (match_operand:V2DI 3 "register_operand" "f")]
+		     UNSPEC_LSX_VMADDWEV))]
+  "ISA_HAS_LSX"
+  "vmaddwev.q.d\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmaddwod_q_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "0")
+		      (match_operand:V2DI 2 "register_operand" "f")
+		      (match_operand:V2DI 3 "register_operand" "f")]
+		     UNSPEC_LSX_VMADDWOD))]
+  "ISA_HAS_LSX"
+  "vmaddwod.q.d\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmaddwev_q_du"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "0")
+		      (match_operand:V2DI 2 "register_operand" "f")
+		      (match_operand:V2DI 3 "register_operand" "f")]
+		     UNSPEC_LSX_VMADDWEV2))]
+  "ISA_HAS_LSX"
+  "vmaddwev.q.du\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmaddwod_q_du"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "0")
+		      (match_operand:V2DI 2 "register_operand" "f")
+		      (match_operand:V2DI 3 "register_operand" "f")]
+		     UNSPEC_LSX_VMADDWOD2))]
+  "ISA_HAS_LSX"
+  "vmaddwod.q.du\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmaddwev_q_du_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "0")
+		      (match_operand:V2DI 2 "register_operand" "f")
+		      (match_operand:V2DI 3 "register_operand" "f")]
+		     UNSPEC_LSX_VMADDWEV3))]
+  "ISA_HAS_LSX"
+  "vmaddwev.q.du.d\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmaddwod_q_du_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "0")
+		      (match_operand:V2DI 2 "register_operand" "f")
+		      (match_operand:V2DI 3 "register_operand" "f")]
+		     UNSPEC_LSX_VMADDWOD3))]
+  "ISA_HAS_LSX"
+  "vmaddwod.q.du.d\t%w0,%w2,%w3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vrotr_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand:ILSX 2 "register_operand" "f")]
+		     UNSPEC_LSX_VROTR))]
+  "ISA_HAS_LSX"
+  "vrotr.<lsxfmt>\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vadd_q"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VADD_Q))]
+  "ISA_HAS_LSX"
+  "vadd.q\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vsub_q"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")
+		      (match_operand:V2DI 2 "register_operand" "f")]
+		     UNSPEC_LSX_VSUB_Q))]
+  "ISA_HAS_LSX"
+  "vsub.q\t%w0,%w1,%w2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vmskgez_b"
+  [(set (match_operand:V16QI 0 "register_operand" "=f")
+	(unspec:V16QI [(match_operand:V16QI 1 "register_operand" "f")]
+		      UNSPEC_LSX_VMSKGEZ))]
+  "ISA_HAS_LSX"
+  "vmskgez.b\t%w0,%w1"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "V16QI")])
+
+(define_insn "lsx_vmsknz_b"
+  [(set (match_operand:V16QI 0 "register_operand" "=f")
+	(unspec:V16QI [(match_operand:V16QI 1 "register_operand" "f")]
+		      UNSPEC_LSX_VMSKNZ))]
+  "ISA_HAS_LSX"
+  "vmsknz.b\t%w0,%w1"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "V16QI")])
+
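+;; vexth: sign- or zero-extend the elements in the high half of the source
+;; vector to the next wider element size.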
+(define_insn "lsx_vexth_h<u>_b<u>"
+  [(set (match_operand:V8HI 0 "register_operand" "=f")
+	(any_extend:V8HI
+	  (vec_select:V8QI
+	    (match_operand:V16QI 1 "register_operand" "f")
+	    (parallel [(const_int 8) (const_int 9)
+		       (const_int 10) (const_int 11)
+		       (const_int 12) (const_int 13)
+		       (const_int 14) (const_int 15)]))))]
+  "ISA_HAS_LSX"
+  "vexth.h<u>.b<u>\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V8HI")])
+
+(define_insn "lsx_vexth_w<u>_h<u>"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(any_extend:V4SI
+	  (vec_select:V4HI
+	    (match_operand:V8HI 1 "register_operand" "f")
+	    (parallel [(const_int 4) (const_int 5)
+		       (const_int 6) (const_int 7)]))))]
+  "ISA_HAS_LSX"
+  "vexth.w<u>.h<u>\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4SI")])
+
+(define_insn "lsx_vexth_d<u>_w<u>"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(any_extend:V2DI
+	  (vec_select:V2SI
+	    (match_operand:V4SI 1 "register_operand" "f")
+	    (parallel [(const_int 2) (const_int 3)]))))]
+  "ISA_HAS_LSX"
+  "vexth.d<u>.w<u>\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vexth_q_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")]
+		     UNSPEC_LSX_VEXTH_Q_D))]
+  "ISA_HAS_LSX"
+  "vexth.q.d\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vexth_qu_du"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")]
+		     UNSPEC_LSX_VEXTH_QU_DU))]
+  "ISA_HAS_LSX"
+  "vexth.qu.du\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V2DI")])
+
+(define_insn "lsx_vrotri_<lsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(rotatert:ILSX (match_operand:ILSX 1 "register_operand" "f")
+		      (match_operand 2 "const_<bitimm>_operand" "")))]
+  "ISA_HAS_LSX"
+  "vrotri.<lsxfmt>\t%w0,%w1,%2"
+  [(set_attr "type" "simd_shf")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vextl_q_d"
+  [(set (match_operand:V2DI 0 "register_operand" "=f")
+	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "f")]
+		     UNSPEC_LSX_VEXTL_Q_D))]
+  "ISA_HAS_LSX"
+  "vextl.q.d\t%w0,%w1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V2DI")])
+
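+;; Shift-right-and-narrow-insert patterns (vsrlni and friends); the
+;; destination is tied to operand 1, which also contributes to the packed
+;; result.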
+(define_insn "lsx_vsrlni_<lsxfmt>_<dlsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VSRLNI))]
+  "ISA_HAS_LSX"
+  "vsrlni.<lsxfmt>.<dlsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsrlrni_<lsxfmt>_<dlsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VSRLRNI))]
+  "ISA_HAS_LSX"
+  "vsrlrni.<lsxfmt>.<dlsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrlni_<lsxfmt>_<dlsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VSSRLNI))]
+  "ISA_HAS_LSX"
+  "vssrlni.<lsxfmt>.<dlsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrlni_<lsxfmt_u>_<dlsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VSSRLNI2))]
+  "ISA_HAS_LSX"
+  "vssrlni.<lsxfmt_u>.<dlsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrlrni_<lsxfmt>_<dlsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VSSRLRNI))]
+  "ISA_HAS_LSX"
+  "vssrlrni.<lsxfmt>.<dlsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrlrni_<lsxfmt_u>_<dlsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VSSRLRNI2))]
+  "ISA_HAS_LSX"
+  "vssrlrni.<lsxfmt_u>.<dlsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsrani_<lsxfmt>_<dlsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VSRANI))]
+  "ISA_HAS_LSX"
+  "vsrani.<lsxfmt>.<dlsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vsrarni_<lsxfmt>_<dlsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VSRARNI))]
+  "ISA_HAS_LSX"
+  "vsrarni.<lsxfmt>.<dlsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrani_<lsxfmt>_<dlsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		    UNSPEC_LSX_VSSRANI))]
+  "ISA_HAS_LSX"
+  "vssrani.<lsxfmt>.<dlsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrani_<lsxfmt_u>_<dlsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VSSRANI2))]
+  "ISA_HAS_LSX"
+  "vssrani.<lsxfmt_u>.<dlsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrarni_<lsxfmt>_<dlsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VSSRARNI))]
+  "ISA_HAS_LSX"
+  "vssrarni.<lsxfmt>.<dlsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vssrarni_<lsxfmt_u>_<dlsxfmt>"
+  [(set (match_operand:ILSX 0 "register_operand" "=f")
+	(unspec:ILSX [(match_operand:ILSX 1 "register_operand" "0")
+		      (match_operand:ILSX 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VSSRARNI2))]
+  "ISA_HAS_LSX"
+  "vssrarni.<lsxfmt_u>.<dlsxfmt>\t%w0,%w2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lsx_vpermi_w"
+  [(set (match_operand:V4SI 0 "register_operand" "=f")
+	(unspec:V4SI [(match_operand:V4SI 1 "register_operand" "0")
+		      (match_operand:V4SI 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand" "")]
+		     UNSPEC_LSX_VPERMI))]
+  "ISA_HAS_LSX"
+  "vpermi.w\t%w0,%w2,%3"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "V4SI")])
diff --git a/gcc/config/loongarch/predicates.md b/gcc/config/loongarch/predicates.md
index 510973aa339..f430629825e 100644
--- a/gcc/config/loongarch/predicates.md
+++ b/gcc/config/loongarch/predicates.md
@@ -87,10 +87,42 @@ (define_predicate "const_immalsl_operand"
   (and (match_code "const_int")
        (match_test "IN_RANGE (INTVAL (op), 1, 4)")))
 
+(define_predicate "const_lsx_branch_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), -1024, 1023)")))
+
+(define_predicate "const_uimm3_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 0, 7)")))
+
+(define_predicate "const_8_to_11_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 8, 11)")))
+
+(define_predicate "const_12_to_15_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 12, 15)")))
+
+(define_predicate "const_uimm4_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 0, 15)")))
+
 (define_predicate "const_uimm5_operand"
   (and (match_code "const_int")
        (match_test "IN_RANGE (INTVAL (op), 0, 31)")))
 
+(define_predicate "const_uimm6_operand"
+  (and (match_code "const_int")
+       (match_test "UIMM6_OPERAND (INTVAL (op))")))
+
+(define_predicate "const_uimm7_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 0, 127)")))
+
+(define_predicate "const_uimm8_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 0, 255)")))
+
 (define_predicate "const_uimm14_operand"
   (and (match_code "const_int")
        (match_test "IN_RANGE (INTVAL (op), 0, 16383)")))
@@ -99,10 +131,74 @@ (define_predicate "const_uimm15_operand"
   (and (match_code "const_int")
        (match_test "IN_RANGE (INTVAL (op), 0, 32767)")))
 
+(define_predicate "const_imm5_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), -16, 15)")))
+
+(define_predicate "const_imm10_operand"
+  (and (match_code "const_int")
+       (match_test "IMM10_OPERAND (INTVAL (op))")))
+
 (define_predicate "const_imm12_operand"
   (and (match_code "const_int")
        (match_test "IMM12_OPERAND (INTVAL (op))")))
 
+(define_predicate "const_imm13_operand"
+  (and (match_code "const_int")
+       (match_test "IMM13_OPERAND (INTVAL (op))")))
+
+(define_predicate "reg_imm10_operand"
+  (ior (match_operand 0 "const_imm10_operand")
+       (match_operand 0 "register_operand")))
+
+(define_predicate "aq8b_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 8, 0)")))
+
+(define_predicate "aq8h_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 8, 1)")))
+
+(define_predicate "aq8w_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 8, 2)")))
+
+(define_predicate "aq8d_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 8, 3)")))
+
+(define_predicate "aq10b_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 10, 0)")))
+
+(define_predicate "aq10h_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 10, 1)")))
+
+(define_predicate "aq10w_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 10, 2)")))
+
+(define_predicate "aq10d_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 10, 3)")))
+
+(define_predicate "aq12b_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 12, 0)")))
+
+(define_predicate "aq12h_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 11, 1)")))
+
+(define_predicate "aq12w_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 10, 2)")))
+
+(define_predicate "aq12d_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 9, 3)")))
+
 (define_predicate "sle_operand"
   (and (match_code "const_int")
        (match_test "IMM12_OPERAND (INTVAL (op) + 1)")))
@@ -112,29 +208,206 @@ (define_predicate "sleu_operand"
        (match_test "INTVAL (op) + 1 != 0")))
 
 (define_predicate "const_0_operand"
-  (and (match_code "const_int,const_double,const_vector")
+  (and (match_code "const_int,const_wide_int,const_double,const_vector")
        (match_test "op == CONST0_RTX (GET_MODE (op))")))
 
+(define_predicate "const_m1_operand"
+  (and (match_code "const_int,const_wide_int,const_double,const_vector")
+       (match_test "op == CONSTM1_RTX (GET_MODE (op))")))
+
+(define_predicate "reg_or_m1_operand"
+  (ior (match_operand 0 "const_m1_operand")
+       (match_operand 0 "register_operand")))
+
 (define_predicate "reg_or_0_operand"
   (ior (match_operand 0 "const_0_operand")
        (match_operand 0 "register_operand")))
 
 (define_predicate "const_1_operand"
-  (and (match_code "const_int,const_double,const_vector")
+  (and (match_code "const_int,const_wide_int,const_double,const_vector")
        (match_test "op == CONST1_RTX (GET_MODE (op))")))
 
 (define_predicate "reg_or_1_operand"
   (ior (match_operand 0 "const_1_operand")
        (match_operand 0 "register_operand")))
 
+;; These are used in vec_merge, hence they accept a bitmask as a const_int.
+(define_predicate "const_exp_2_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (exact_log2 (INTVAL (op)), 0, 1)")))
+
+(define_predicate "const_exp_4_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (exact_log2 (INTVAL (op)), 0, 3)")))
+
+(define_predicate "const_exp_8_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (exact_log2 (INTVAL (op)), 0, 7)")))
+
+(define_predicate "const_exp_16_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (exact_log2 (INTVAL (op)), 0, 15)")))
+
+(define_predicate "const_exp_32_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (exact_log2 (INTVAL (op)), 0, 31)")))
+
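Concretely, const_exp_16_operand accepts 1, 2, 4, ..., 32768: a single set bit that selects one of 16 lanes in a vec_merge mask.  An equivalent stand-alone check, for illustration only:

  #include <stdbool.h>

  /* True iff exactly one bit is set and it lies below lane "nelts"
     (nelts is 2/4/8/16/32, matching the predicate names above).  */
  static bool
  single_lane_mask_p (unsigned long long x, int nelts)
  {
    return x != 0 && (x & (x - 1)) == 0 && x < (1ULL << nelts);
  }
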
+;; This is used for indexing into vectors, and hence only accepts const_int.
+(define_predicate "const_0_or_1_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 0, 1)")))
+
+(define_predicate "const_0_to_3_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 0, 3)")))
+
+(define_predicate "const_0_to_7_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 0, 7)")))
+
+(define_predicate "const_2_or_3_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 2, 3)")))
+
+(define_predicate "const_4_to_7_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 4, 7)")))
+
+(define_predicate "const_8_to_15_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 0, 7)")))
+
+(define_predicate "const_16_to_31_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 0, 7)")))
+
+(define_predicate "qi_mask_operand"
+  (and (match_code "const_int")
+       (match_test "UINTVAL (op) == 0xff")))
+
+(define_predicate "hi_mask_operand"
+  (and (match_code "const_int")
+       (match_test "UINTVAL (op) == 0xffff")))
+
 (define_predicate "lu52i_mask_operand"
   (and (match_code "const_int")
        (match_test "UINTVAL (op) == 0xfffffffffffff")))
 
+(define_predicate "si_mask_operand"
+  (and (match_code "const_int")
+       (match_test "UINTVAL (op) == 0xffffffff")))
+
 (define_predicate "low_bitmask_operand"
   (and (match_code "const_int")
        (match_test "low_bitmask_len (mode, INTVAL (op)) > 12")))
 
+(define_predicate "d_operand"
+  (and (match_code "reg")
+       (match_test "GP_REG_P (REGNO (op))")))
+
+(define_predicate "db4_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_unsigned_immediate_p (INTVAL (op) + 1, 4, 0)")))
+
+(define_predicate "db7_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_unsigned_immediate_p (INTVAL (op) + 1, 7, 0)")))
+
+(define_predicate "db8_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_unsigned_immediate_p (INTVAL (op) + 1, 8, 0)")))
+
+(define_predicate "ib3_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_unsigned_immediate_p (INTVAL (op) - 1, 3, 0)")))
+
+(define_predicate "sb4_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 4, 0)")))
+
+(define_predicate "sb5_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 5, 0)")))
+
+(define_predicate "sb8_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 8, 0)")))
+
+(define_predicate "sd8_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_signed_immediate_p (INTVAL (op), 8, 3)")))
+
+(define_predicate "ub4_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_unsigned_immediate_p (INTVAL (op), 4, 0)")))
+
+(define_predicate "ub8_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_unsigned_immediate_p (INTVAL (op), 8, 0)")))
+
+(define_predicate "uh4_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_unsigned_immediate_p (INTVAL (op), 4, 1)")))
+
+(define_predicate "uw4_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_unsigned_immediate_p (INTVAL (op), 4, 2)")))
+
+(define_predicate "uw5_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_unsigned_immediate_p (INTVAL (op), 5, 2)")))
+
+(define_predicate "uw6_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_unsigned_immediate_p (INTVAL (op), 6, 2)")))
+
+(define_predicate "uw8_operand"
+  (and (match_code "const_int")
+       (match_test "loongarch_unsigned_immediate_p (INTVAL (op), 8, 2)")))
+
+(define_predicate "addiur2_operand"
+  (and (match_code "const_int")
+	(ior (match_test "INTVAL (op) == -1")
+	     (match_test "INTVAL (op) == 1")
+	     (match_test "INTVAL (op) == 4")
+	     (match_test "INTVAL (op) == 8")
+	     (match_test "INTVAL (op) == 12")
+	     (match_test "INTVAL (op) == 16")
+	     (match_test "INTVAL (op) == 20")
+	     (match_test "INTVAL (op) == 24"))))
+
+(define_predicate "addiusp_operand"
+  (and (match_code "const_int")
+       (ior (match_test "(IN_RANGE (INTVAL (op), 2, 257))")
+	    (match_test "(IN_RANGE (INTVAL (op), -258, -3))"))))
+
+(define_predicate "andi16_operand"
+  (and (match_code "const_int")
+	(ior (match_test "IN_RANGE (INTVAL (op), 1, 4)")
+	     (match_test "IN_RANGE (INTVAL (op), 7, 8)")
+	     (match_test "IN_RANGE (INTVAL (op), 15, 16)")
+	     (match_test "IN_RANGE (INTVAL (op), 31, 32)")
+	     (match_test "IN_RANGE (INTVAL (op), 63, 64)")
+	     (match_test "INTVAL (op) == 255")
+	     (match_test "INTVAL (op) == 32768")
+	     (match_test "INTVAL (op) == 65535"))))
+
+(define_predicate "movep_src_register"
+  (and (match_code "reg")
+       (ior (match_test ("IN_RANGE (REGNO (op), 2, 3)"))
+	    (match_test ("IN_RANGE (REGNO (op), 16, 20)")))))
+
+(define_predicate "movep_src_operand"
+  (ior (match_operand 0 "const_0_operand")
+       (match_operand 0 "movep_src_register")))
+
+(define_predicate "fcc_reload_operand"
+  (and (match_code "reg,subreg")
+       (match_test "FCC_REG_P (true_regnum (op))")))
+
+(define_predicate "muldiv_target_operand"
+		(match_operand 0 "register_operand"))
+
 (define_predicate "const_call_insn_operand"
   (match_code "const,symbol_ref,label_ref")
 {
@@ -303,3 +576,59 @@ (define_predicate "small_data_pattern"
 (define_predicate "non_volatile_mem_operand"
   (and (match_operand 0 "memory_operand")
        (not (match_test "MEM_VOLATILE_P (op)"))))
+
+(define_predicate "const_vector_same_val_operand"
+  (match_code "const_vector")
+{
+  return loongarch_const_vector_same_val_p (op, mode);
+})
+
+(define_predicate "const_vector_same_simm5_operand"
+  (match_code "const_vector")
+{
+  return loongarch_const_vector_same_int_p (op, mode, -16, 15);
+})
+
+(define_predicate "const_vector_same_uimm5_operand"
+  (match_code "const_vector")
+{
+  return loongarch_const_vector_same_int_p (op, mode, 0, 31);
+})
+
+(define_predicate "const_vector_same_ximm5_operand"
+  (match_code "const_vector")
+{
+  return loongarch_const_vector_same_int_p (op, mode, -31, 31);
+})
+
+(define_predicate "const_vector_same_uimm6_operand"
+  (match_code "const_vector")
+{
+  return loongarch_const_vector_same_int_p (op, mode, 0, 63);
+})
+
+(define_predicate "par_const_vector_shf_set_operand"
+  (match_code "parallel")
+{
+  return loongarch_const_vector_shuffle_set_p (op, mode);
+})
+
+(define_predicate "reg_or_vector_same_val_operand"
+  (ior (match_operand 0 "register_operand")
+       (match_operand 0 "const_vector_same_val_operand")))
+
+(define_predicate "reg_or_vector_same_simm5_operand"
+  (ior (match_operand 0 "register_operand")
+       (match_operand 0 "const_vector_same_simm5_operand")))
+
+(define_predicate "reg_or_vector_same_uimm5_operand"
+  (ior (match_operand 0 "register_operand")
+       (match_operand 0 "const_vector_same_uimm5_operand")))
+
+(define_predicate "reg_or_vector_same_ximm5_operand"
+  (ior (match_operand 0 "register_operand")
+       (match_operand 0 "const_vector_same_ximm5_operand")))
+
+(define_predicate "reg_or_vector_same_uimm6_operand"
+  (ior (match_operand 0 "register_operand")
+       (match_operand 0 "const_vector_same_uimm6_operand")))
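As an illustration of where the *_same_simm5 predicates trigger: a splat constant whose identical elements lie in [-16, 15] is a CONST_VECTOR that const_vector_same_simm5_operand accepts, so (with -mlsx) code like the following can keep the constant as an immediate operand instead of forcing it into a register.  This is a sketch of the intent, not a guarantee about the generated code.

  typedef int v4i32 __attribute__ ((vector_size (16)));

  v4i32
  sub_splat (v4i32 b)
  {
    /* {5, 5, 5, 5} has identical elements within [-16, 15].  */
    return b - (v4i32) {5, 5, 5, 5};
  }
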
diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi
index f14dd32b2dc..bd41fd48f80 100644
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -2892,6 +2892,17 @@ as @code{st.w} and @code{ld.w}.
 A signed 12-bit constant (for arithmetic instructions).
 @item K
 An unsigned 12-bit constant (for logic instructions).
+@item M
+A constant that cannot be loaded using @code{lui}, @code{addiu}
+or @code{ori}.
+@item N
+A constant in the range -65535 to -1 (inclusive).
+@item O
+A signed 15-bit constant.
+@item P
+A constant in the range 1 to 65535 (inclusive).
+@item R
+An address that can be used in a non-macro load or store.
 @item ZB
 An address that is held in a general-purpose register.
 The offset is zero.
-- 
2.36.0


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 3/8] LoongArch: Added Loongson SX directive builtin function support.
  2023-07-18 11:06 [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target Chenghui Pan
  2023-07-18 11:06 ` [PATCH v2 1/8] LoongArch: Added Loongson SX vector directive compilation framework Chenghui Pan
  2023-07-18 11:06 ` [PATCH v2 2/8] LoongArch: Added Loongson SX base instruction support Chenghui Pan
@ 2023-07-18 11:06 ` Chenghui Pan
  2023-07-18 11:06 ` [PATCH v2 4/8] LoongArch: Added Loongson ASX vector directive compilation framework Chenghui Pan
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Chenghui Pan @ 2023-07-18 11:06 UTC (permalink / raw)
  To: gcc-patches; +Cc: xry111, i, chenglulu, xuchenghua

From: Lulu Cheng <chenglulu@loongson.cn>

gcc/ChangeLog:

	* config.gcc: Add lsxintrin.h to extra_headers.
	* config/loongarch/loongarch-builtins.cc (LARCH_FTYPE_NAME4): Add builtin function support.
	(enum loongarch_builtin_type): Ditto.
	(AVAIL_ALL): Ditto.
	(LARCH_BUILTIN): Ditto.
	(LSX_BUILTIN): Ditto.
	(LSX_BUILTIN_TEST_BRANCH): Ditto.
	(LSX_NO_TARGET_BUILTIN): Ditto.
	(CODE_FOR_lsx_vsadd_b): Ditto.
	(CODE_FOR_lsx_vsadd_h): Ditto.
	(CODE_FOR_lsx_vsadd_w): Ditto.
	(CODE_FOR_lsx_vsadd_d): Ditto.
	(CODE_FOR_lsx_vsadd_bu): Ditto.
	(CODE_FOR_lsx_vsadd_hu): Ditto.
	(CODE_FOR_lsx_vsadd_wu): Ditto.
	(CODE_FOR_lsx_vsadd_du): Ditto.
	(CODE_FOR_lsx_vadd_b): Ditto.
	(CODE_FOR_lsx_vadd_h): Ditto.
	(CODE_FOR_lsx_vadd_w): Ditto.
	(CODE_FOR_lsx_vadd_d): Ditto.
	(CODE_FOR_lsx_vaddi_bu): Ditto.
	(CODE_FOR_lsx_vaddi_hu): Ditto.
	(CODE_FOR_lsx_vaddi_wu): Ditto.
	(CODE_FOR_lsx_vaddi_du): Ditto.
	(CODE_FOR_lsx_vand_v): Ditto.
	(CODE_FOR_lsx_vandi_b): Ditto.
	(CODE_FOR_lsx_bnz_v): Ditto.
	(CODE_FOR_lsx_bz_v): Ditto.
	(CODE_FOR_lsx_vbitsel_v): Ditto.
	(CODE_FOR_lsx_vseqi_b): Ditto.
	(CODE_FOR_lsx_vseqi_h): Ditto.
	(CODE_FOR_lsx_vseqi_w): Ditto.
	(CODE_FOR_lsx_vseqi_d): Ditto.
	(CODE_FOR_lsx_vslti_b): Ditto.
	(CODE_FOR_lsx_vslti_h): Ditto.
	(CODE_FOR_lsx_vslti_w): Ditto.
	(CODE_FOR_lsx_vslti_d): Ditto.
	(CODE_FOR_lsx_vslti_bu): Ditto.
	(CODE_FOR_lsx_vslti_hu): Ditto.
	(CODE_FOR_lsx_vslti_wu): Ditto.
	(CODE_FOR_lsx_vslti_du): Ditto.
	(CODE_FOR_lsx_vslei_b): Ditto.
	(CODE_FOR_lsx_vslei_h): Ditto.
	(CODE_FOR_lsx_vslei_w): Ditto.
	(CODE_FOR_lsx_vslei_d): Ditto.
	(CODE_FOR_lsx_vslei_bu): Ditto.
	(CODE_FOR_lsx_vslei_hu): Ditto.
	(CODE_FOR_lsx_vslei_wu): Ditto.
	(CODE_FOR_lsx_vslei_du): Ditto.
	(CODE_FOR_lsx_vdiv_b): Ditto.
	(CODE_FOR_lsx_vdiv_h): Ditto.
	(CODE_FOR_lsx_vdiv_w): Ditto.
	(CODE_FOR_lsx_vdiv_d): Ditto.
	(CODE_FOR_lsx_vdiv_bu): Ditto.
	(CODE_FOR_lsx_vdiv_hu): Ditto.
	(CODE_FOR_lsx_vdiv_wu): Ditto.
	(CODE_FOR_lsx_vdiv_du): Ditto.
	(CODE_FOR_lsx_vfadd_s): Ditto.
	(CODE_FOR_lsx_vfadd_d): Ditto.
	(CODE_FOR_lsx_vftintrz_w_s): Ditto.
	(CODE_FOR_lsx_vftintrz_l_d): Ditto.
	(CODE_FOR_lsx_vftintrz_wu_s): Ditto.
	(CODE_FOR_lsx_vftintrz_lu_d): Ditto.
	(CODE_FOR_lsx_vffint_s_w): Ditto.
	(CODE_FOR_lsx_vffint_d_l): Ditto.
	(CODE_FOR_lsx_vffint_s_wu): Ditto.
	(CODE_FOR_lsx_vffint_d_lu): Ditto.
	(CODE_FOR_lsx_vfsub_s): Ditto.
	(CODE_FOR_lsx_vfsub_d): Ditto.
	(CODE_FOR_lsx_vfmul_s): Ditto.
	(CODE_FOR_lsx_vfmul_d): Ditto.
	(CODE_FOR_lsx_vfdiv_s): Ditto.
	(CODE_FOR_lsx_vfdiv_d): Ditto.
	(CODE_FOR_lsx_vfmax_s): Ditto.
	(CODE_FOR_lsx_vfmax_d): Ditto.
	(CODE_FOR_lsx_vfmin_s): Ditto.
	(CODE_FOR_lsx_vfmin_d): Ditto.
	(CODE_FOR_lsx_vfsqrt_s): Ditto.
	(CODE_FOR_lsx_vfsqrt_d): Ditto.
	(CODE_FOR_lsx_vflogb_s): Ditto.
	(CODE_FOR_lsx_vflogb_d): Ditto.
	(CODE_FOR_lsx_vmax_b): Ditto.
	(CODE_FOR_lsx_vmax_h): Ditto.
	(CODE_FOR_lsx_vmax_w): Ditto.
	(CODE_FOR_lsx_vmax_d): Ditto.
	(CODE_FOR_lsx_vmaxi_b): Ditto.
	(CODE_FOR_lsx_vmaxi_h): Ditto.
	(CODE_FOR_lsx_vmaxi_w): Ditto.
	(CODE_FOR_lsx_vmaxi_d): Ditto.
	(CODE_FOR_lsx_vmax_bu): Ditto.
	(CODE_FOR_lsx_vmax_hu): Ditto.
	(CODE_FOR_lsx_vmax_wu): Ditto.
	(CODE_FOR_lsx_vmax_du): Ditto.
	(CODE_FOR_lsx_vmaxi_bu): Ditto.
	(CODE_FOR_lsx_vmaxi_hu): Ditto.
	(CODE_FOR_lsx_vmaxi_wu): Ditto.
	(CODE_FOR_lsx_vmaxi_du): Ditto.
	(CODE_FOR_lsx_vmin_b): Ditto.
	(CODE_FOR_lsx_vmin_h): Ditto.
	(CODE_FOR_lsx_vmin_w): Ditto.
	(CODE_FOR_lsx_vmin_d): Ditto.
	(CODE_FOR_lsx_vmini_b): Ditto.
	(CODE_FOR_lsx_vmini_h): Ditto.
	(CODE_FOR_lsx_vmini_w): Ditto.
	(CODE_FOR_lsx_vmini_d): Ditto.
	(CODE_FOR_lsx_vmin_bu): Ditto.
	(CODE_FOR_lsx_vmin_hu): Ditto.
	(CODE_FOR_lsx_vmin_wu): Ditto.
	(CODE_FOR_lsx_vmin_du): Ditto.
	(CODE_FOR_lsx_vmini_bu): Ditto.
	(CODE_FOR_lsx_vmini_hu): Ditto.
	(CODE_FOR_lsx_vmini_wu): Ditto.
	(CODE_FOR_lsx_vmini_du): Ditto.
	(CODE_FOR_lsx_vmod_b): Ditto.
	(CODE_FOR_lsx_vmod_h): Ditto.
	(CODE_FOR_lsx_vmod_w): Ditto.
	(CODE_FOR_lsx_vmod_d): Ditto.
	(CODE_FOR_lsx_vmod_bu): Ditto.
	(CODE_FOR_lsx_vmod_hu): Ditto.
	(CODE_FOR_lsx_vmod_wu): Ditto.
	(CODE_FOR_lsx_vmod_du): Ditto.
	(CODE_FOR_lsx_vmul_b): Ditto.
	(CODE_FOR_lsx_vmul_h): Ditto.
	(CODE_FOR_lsx_vmul_w): Ditto.
	(CODE_FOR_lsx_vmul_d): Ditto.
	(CODE_FOR_lsx_vclz_b): Ditto.
	(CODE_FOR_lsx_vclz_h): Ditto.
	(CODE_FOR_lsx_vclz_w): Ditto.
	(CODE_FOR_lsx_vclz_d): Ditto.
	(CODE_FOR_lsx_vnor_v): Ditto.
	(CODE_FOR_lsx_vor_v): Ditto.
	(CODE_FOR_lsx_vori_b): Ditto.
	(CODE_FOR_lsx_vnori_b): Ditto.
	(CODE_FOR_lsx_vpcnt_b): Ditto.
	(CODE_FOR_lsx_vpcnt_h): Ditto.
	(CODE_FOR_lsx_vpcnt_w): Ditto.
	(CODE_FOR_lsx_vpcnt_d): Ditto.
	(CODE_FOR_lsx_vxor_v): Ditto.
	(CODE_FOR_lsx_vxori_b): Ditto.
	(CODE_FOR_lsx_vsll_b): Ditto.
	(CODE_FOR_lsx_vsll_h): Ditto.
	(CODE_FOR_lsx_vsll_w): Ditto.
	(CODE_FOR_lsx_vsll_d): Ditto.
	(CODE_FOR_lsx_vslli_b): Ditto.
	(CODE_FOR_lsx_vslli_h): Ditto.
	(CODE_FOR_lsx_vslli_w): Ditto.
	(CODE_FOR_lsx_vslli_d): Ditto.
	(CODE_FOR_lsx_vsra_b): Ditto.
	(CODE_FOR_lsx_vsra_h): Ditto.
	(CODE_FOR_lsx_vsra_w): Ditto.
	(CODE_FOR_lsx_vsra_d): Ditto.
	(CODE_FOR_lsx_vsrai_b): Ditto.
	(CODE_FOR_lsx_vsrai_h): Ditto.
	(CODE_FOR_lsx_vsrai_w): Ditto.
	(CODE_FOR_lsx_vsrai_d): Ditto.
	(CODE_FOR_lsx_vsrl_b): Ditto.
	(CODE_FOR_lsx_vsrl_h): Ditto.
	(CODE_FOR_lsx_vsrl_w): Ditto.
	(CODE_FOR_lsx_vsrl_d): Ditto.
	(CODE_FOR_lsx_vsrli_b): Ditto.
	(CODE_FOR_lsx_vsrli_h): Ditto.
	(CODE_FOR_lsx_vsrli_w): Ditto.
	(CODE_FOR_lsx_vsrli_d): Ditto.
	(CODE_FOR_lsx_vsub_b): Ditto.
	(CODE_FOR_lsx_vsub_h): Ditto.
	(CODE_FOR_lsx_vsub_w): Ditto.
	(CODE_FOR_lsx_vsub_d): Ditto.
	(CODE_FOR_lsx_vsubi_bu): Ditto.
	(CODE_FOR_lsx_vsubi_hu): Ditto.
	(CODE_FOR_lsx_vsubi_wu): Ditto.
	(CODE_FOR_lsx_vsubi_du): Ditto.
	(CODE_FOR_lsx_vpackod_d): Ditto.
	(CODE_FOR_lsx_vpackev_d): Ditto.
	(CODE_FOR_lsx_vpickod_d): Ditto.
	(CODE_FOR_lsx_vpickev_d): Ditto.
	(CODE_FOR_lsx_vrepli_b): Ditto.
	(CODE_FOR_lsx_vrepli_h): Ditto.
	(CODE_FOR_lsx_vrepli_w): Ditto.
	(CODE_FOR_lsx_vrepli_d): Ditto.
	(CODE_FOR_lsx_vsat_b): Ditto.
	(CODE_FOR_lsx_vsat_h): Ditto.
	(CODE_FOR_lsx_vsat_w): Ditto.
	(CODE_FOR_lsx_vsat_d): Ditto.
	(CODE_FOR_lsx_vsat_bu): Ditto.
	(CODE_FOR_lsx_vsat_hu): Ditto.
	(CODE_FOR_lsx_vsat_wu): Ditto.
	(CODE_FOR_lsx_vsat_du): Ditto.
	(CODE_FOR_lsx_vavg_b): Ditto.
	(CODE_FOR_lsx_vavg_h): Ditto.
	(CODE_FOR_lsx_vavg_w): Ditto.
	(CODE_FOR_lsx_vavg_d): Ditto.
	(CODE_FOR_lsx_vavg_bu): Ditto.
	(CODE_FOR_lsx_vavg_hu): Ditto.
	(CODE_FOR_lsx_vavg_wu): Ditto.
	(CODE_FOR_lsx_vavg_du): Ditto.
	(CODE_FOR_lsx_vavgr_b): Ditto.
	(CODE_FOR_lsx_vavgr_h): Ditto.
	(CODE_FOR_lsx_vavgr_w): Ditto.
	(CODE_FOR_lsx_vavgr_d): Ditto.
	(CODE_FOR_lsx_vavgr_bu): Ditto.
	(CODE_FOR_lsx_vavgr_hu): Ditto.
	(CODE_FOR_lsx_vavgr_wu): Ditto.
	(CODE_FOR_lsx_vavgr_du): Ditto.
	(CODE_FOR_lsx_vssub_b): Ditto.
	(CODE_FOR_lsx_vssub_h): Ditto.
	(CODE_FOR_lsx_vssub_w): Ditto.
	(CODE_FOR_lsx_vssub_d): Ditto.
	(CODE_FOR_lsx_vssub_bu): Ditto.
	(CODE_FOR_lsx_vssub_hu): Ditto.
	(CODE_FOR_lsx_vssub_wu): Ditto.
	(CODE_FOR_lsx_vssub_du): Ditto.
	(CODE_FOR_lsx_vabsd_b): Ditto.
	(CODE_FOR_lsx_vabsd_h): Ditto.
	(CODE_FOR_lsx_vabsd_w): Ditto.
	(CODE_FOR_lsx_vabsd_d): Ditto.
	(CODE_FOR_lsx_vabsd_bu): Ditto.
	(CODE_FOR_lsx_vabsd_hu): Ditto.
	(CODE_FOR_lsx_vabsd_wu): Ditto.
	(CODE_FOR_lsx_vabsd_du): Ditto.
	(CODE_FOR_lsx_vftint_w_s): Ditto.
	(CODE_FOR_lsx_vftint_l_d): Ditto.
	(CODE_FOR_lsx_vftint_wu_s): Ditto.
	(CODE_FOR_lsx_vftint_lu_d): Ditto.
	(CODE_FOR_lsx_vandn_v): Ditto.
	(CODE_FOR_lsx_vorn_v): Ditto.
	(CODE_FOR_lsx_vneg_b): Ditto.
	(CODE_FOR_lsx_vneg_h): Ditto.
	(CODE_FOR_lsx_vneg_w): Ditto.
	(CODE_FOR_lsx_vneg_d): Ditto.
	(CODE_FOR_lsx_vshuf4i_d): Ditto.
	(CODE_FOR_lsx_vbsrl_v): Ditto.
	(CODE_FOR_lsx_vbsll_v): Ditto.
	(CODE_FOR_lsx_vfmadd_s): Ditto.
	(CODE_FOR_lsx_vfmadd_d): Ditto.
	(CODE_FOR_lsx_vfmsub_s): Ditto.
	(CODE_FOR_lsx_vfmsub_d): Ditto.
	(CODE_FOR_lsx_vfnmadd_s): Ditto.
	(CODE_FOR_lsx_vfnmadd_d): Ditto.
	(CODE_FOR_lsx_vfnmsub_s): Ditto.
	(CODE_FOR_lsx_vfnmsub_d): Ditto.
	(CODE_FOR_lsx_vmuh_b): Ditto.
	(CODE_FOR_lsx_vmuh_h): Ditto.
	(CODE_FOR_lsx_vmuh_w): Ditto.
	(CODE_FOR_lsx_vmuh_d): Ditto.
	(CODE_FOR_lsx_vmuh_bu): Ditto.
	(CODE_FOR_lsx_vmuh_hu): Ditto.
	(CODE_FOR_lsx_vmuh_wu): Ditto.
	(CODE_FOR_lsx_vmuh_du): Ditto.
	(CODE_FOR_lsx_vsllwil_h_b): Ditto.
	(CODE_FOR_lsx_vsllwil_w_h): Ditto.
	(CODE_FOR_lsx_vsllwil_d_w): Ditto.
	(CODE_FOR_lsx_vsllwil_hu_bu): Ditto.
	(CODE_FOR_lsx_vsllwil_wu_hu): Ditto.
	(CODE_FOR_lsx_vsllwil_du_wu): Ditto.
	(CODE_FOR_lsx_vssran_b_h): Ditto.
	(CODE_FOR_lsx_vssran_h_w): Ditto.
	(CODE_FOR_lsx_vssran_w_d): Ditto.
	(CODE_FOR_lsx_vssran_bu_h): Ditto.
	(CODE_FOR_lsx_vssran_hu_w): Ditto.
	(CODE_FOR_lsx_vssran_wu_d): Ditto.
	(CODE_FOR_lsx_vssrarn_b_h): Ditto.
	(CODE_FOR_lsx_vssrarn_h_w): Ditto.
	(CODE_FOR_lsx_vssrarn_w_d): Ditto.
	(CODE_FOR_lsx_vssrarn_bu_h): Ditto.
	(CODE_FOR_lsx_vssrarn_hu_w): Ditto.
	(CODE_FOR_lsx_vssrarn_wu_d): Ditto.
	(CODE_FOR_lsx_vssrln_bu_h): Ditto.
	(CODE_FOR_lsx_vssrln_hu_w): Ditto.
	(CODE_FOR_lsx_vssrln_wu_d): Ditto.
	(CODE_FOR_lsx_vssrlrn_bu_h): Ditto.
	(CODE_FOR_lsx_vssrlrn_hu_w): Ditto.
	(CODE_FOR_lsx_vssrlrn_wu_d): Ditto.
	(loongarch_builtin_vector_type): Ditto.
	(loongarch_build_cvpointer_type): Ditto.
	(LARCH_ATYPE_CVPOINTER): Ditto.
	(LARCH_ATYPE_BOOLEAN): Ditto.
	(LARCH_ATYPE_V2SF): Ditto.
	(LARCH_ATYPE_V2HI): Ditto.
	(LARCH_ATYPE_V2SI): Ditto.
	(LARCH_ATYPE_V4QI): Ditto.
	(LARCH_ATYPE_V4HI): Ditto.
	(LARCH_ATYPE_V8QI): Ditto.
	(LARCH_ATYPE_V2DI): Ditto.
	(LARCH_ATYPE_V4SI): Ditto.
	(LARCH_ATYPE_V8HI): Ditto.
	(LARCH_ATYPE_V16QI): Ditto.
	(LARCH_ATYPE_V2DF): Ditto.
	(LARCH_ATYPE_V4SF): Ditto.
	(LARCH_ATYPE_V4DI): Ditto.
	(LARCH_ATYPE_V8SI): Ditto.
	(LARCH_ATYPE_V16HI): Ditto.
	(LARCH_ATYPE_V32QI): Ditto.
	(LARCH_ATYPE_V4DF): Ditto.
	(LARCH_ATYPE_V8SF): Ditto.
	(LARCH_ATYPE_UV2DI): Ditto.
	(LARCH_ATYPE_UV4SI): Ditto.
	(LARCH_ATYPE_UV8HI): Ditto.
	(LARCH_ATYPE_UV16QI): Ditto.
	(LARCH_ATYPE_UV4DI): Ditto.
	(LARCH_ATYPE_UV8SI): Ditto.
	(LARCH_ATYPE_UV16HI): Ditto.
	(LARCH_ATYPE_UV32QI): Ditto.
	(LARCH_ATYPE_UV2SI): Ditto.
	(LARCH_ATYPE_UV4HI): Ditto.
	(LARCH_ATYPE_UV8QI): Ditto.
	(loongarch_builtin_vectorized_function): Ditto.
	(LARCH_GET_BUILTIN): Ditto.
	(loongarch_expand_builtin_insn): Ditto.
	(loongarch_expand_builtin_lsx_test_branch): Ditto.
	(loongarch_expand_builtin): Ditto.
	* config/loongarch/loongarch-ftypes.def (1): Ditto.
	(2): Ditto.
	(3): Ditto.
	(4): Ditto.
	* config/loongarch/lsxintrin.h: New file.
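
For example, the following uses one of the new builtins directly when built
with -mlsx; the lsxintrin.h wrappers added by this patch provide the intended
user-facing interface on top of these builtins.

  typedef int v4i32 __attribute__ ((vector_size (16)));

  v4i32
  add_w (v4i32 a, v4i32 b)
  {
    /* Maps to CODE_FOR_addv4si3 via the CODE_FOR_lsx_* remapping in
       loongarch-builtins.cc.  */
    return __builtin_lsx_vadd_w (a, b);
  }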
---
 gcc/config.gcc                             |    2 +-
 gcc/config/loongarch/loongarch-builtins.cc | 1498 +++++-
 gcc/config/loongarch/loongarch-ftypes.def  |  397 +-
 gcc/config/loongarch/lsxintrin.h           | 5181 ++++++++++++++++++++
 4 files changed, 7071 insertions(+), 7 deletions(-)
 create mode 100644 gcc/config/loongarch/lsxintrin.h

diff --git a/gcc/config.gcc b/gcc/config.gcc
index d88071773c9..3aa1d9dd4e6 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -468,7 +468,7 @@ mips*-*-*)
 	;;
 loongarch*-*-*)
 	cpu_type=loongarch
-	extra_headers="larchintrin.h"
+	extra_headers="larchintrin.h lsxintrin.h"
 	extra_objs="loongarch-c.o loongarch-builtins.o loongarch-cpu.o loongarch-opts.o loongarch-def.o"
 	extra_gcc_objs="loongarch-driver.o loongarch-cpu.o loongarch-opts.o loongarch-def.o"
 	extra_options="${extra_options} g.opt fused-madd.opt"
diff --git a/gcc/config/loongarch/loongarch-builtins.cc b/gcc/config/loongarch/loongarch-builtins.cc
index ebe70a986c3..5958f5b7fbe 100644
--- a/gcc/config/loongarch/loongarch-builtins.cc
+++ b/gcc/config/loongarch/loongarch-builtins.cc
@@ -34,14 +34,18 @@ along with GCC; see the file COPYING3.  If not see
 #include "recog.h"
 #include "diagnostic.h"
 #include "fold-const.h"
+#include "explow.h"
 #include "expr.h"
 #include "langhooks.h"
 #include "emit-rtl.h"
+#include "case-cfn-macros.h"
 
 /* Macros to create an enumeration identifier for a function prototype.  */
 #define LARCH_FTYPE_NAME1(A, B) LARCH_##A##_FTYPE_##B
 #define LARCH_FTYPE_NAME2(A, B, C) LARCH_##A##_FTYPE_##B##_##C
 #define LARCH_FTYPE_NAME3(A, B, C, D) LARCH_##A##_FTYPE_##B##_##C##_##D
+#define LARCH_FTYPE_NAME4(A, B, C, D, E) \
+  LARCH_##A##_FTYPE_##B##_##C##_##D##_##E
 
 /* Classifies the prototype of a built-in function.  */
 enum loongarch_function_type
@@ -64,6 +68,12 @@ enum loongarch_builtin_type
      value and the arguments are mapped to operands 0 and above.  */
   LARCH_BUILTIN_DIRECT_NO_TARGET,
 
+  /* For generating LoongArch LSX.  */
+  LARCH_BUILTIN_LSX,
+
+  /* The function corresponds to an LSX conditional branch instruction
+     combined with a compare instruction.  */
+  LARCH_BUILTIN_LSX_TEST_BRANCH,
 };
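For illustration, a LARCH_BUILTIN_LSX_TEST_BRANCH builtin is used from C as an integer-valued test that the compiler is expected to combine with the following branch; assuming bnz_v means "any bit of the whole vector register is set", a user-level sketch looks like:

  typedef unsigned char uv16qi __attribute__ ((vector_size (16)));

  int
  any_set (uv16qi v)
  {
    return __builtin_lsx_bnz_v (v);   /* nonzero iff v is not all-zero */
  }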
 
 /* Declare an availability predicate for built-in functions that require
@@ -101,6 +111,7 @@ struct loongarch_builtin_description
 };
 
 AVAIL_ALL (hard_float, TARGET_HARD_FLOAT_ABI)
+AVAIL_ALL (lsx, ISA_HAS_LSX)
 
 /* Construct a loongarch_builtin_description from the given arguments.
 
@@ -120,8 +131,8 @@ AVAIL_ALL (hard_float, TARGET_HARD_FLOAT_ABI)
 #define LARCH_BUILTIN(INSN, NAME, BUILTIN_TYPE, FUNCTION_TYPE, AVAIL) \
   { \
     CODE_FOR_loongarch_##INSN, "__builtin_loongarch_" NAME, \
-      BUILTIN_TYPE, FUNCTION_TYPE, \
-      loongarch_builtin_avail_##AVAIL \
+    BUILTIN_TYPE, FUNCTION_TYPE, \
+    loongarch_builtin_avail_##AVAIL \
   }
 
 /* Define __builtin_loongarch_<INSN>, which is a LARCH_BUILTIN_DIRECT function
@@ -137,6 +148,300 @@ AVAIL_ALL (hard_float, TARGET_HARD_FLOAT_ABI)
   LARCH_BUILTIN (INSN, #INSN, LARCH_BUILTIN_DIRECT_NO_TARGET, \
 		 FUNCTION_TYPE, AVAIL)
 
+/* Define an LSX LARCH_BUILTIN_DIRECT function __builtin_lsx_<INSN>
+   for instruction CODE_FOR_lsx_<INSN>.  FUNCTION_TYPE is a builtin_description
+   field.  */
+#define LSX_BUILTIN(INSN, FUNCTION_TYPE)				\
+  { CODE_FOR_lsx_ ## INSN,						\
+    "__builtin_lsx_" #INSN,  LARCH_BUILTIN_DIRECT,			\
+    FUNCTION_TYPE, loongarch_builtin_avail_lsx }
+
+/* Define an LSX LARCH_BUILTIN_LSX_TEST_BRANCH function __builtin_lsx_<INSN>
+   for instruction CODE_FOR_lsx_<INSN>.  FUNCTION_TYPE is a builtin_description
+   field.  */
+#define LSX_BUILTIN_TEST_BRANCH(INSN, FUNCTION_TYPE)			\
+  { CODE_FOR_lsx_ ## INSN,						\
+    "__builtin_lsx_" #INSN, LARCH_BUILTIN_LSX_TEST_BRANCH,		\
+    FUNCTION_TYPE, loongarch_builtin_avail_lsx }
+
+/* Define an LSX LARCH_BUILTIN_DIRECT_NO_TARGET function __builtin_lsx_<INSN>
+   for instruction CODE_FOR_lsx_<INSN>.  FUNCTION_TYPE is a builtin_description
+   field.  */
+#define LSX_NO_TARGET_BUILTIN(INSN, FUNCTION_TYPE)			\
+  { CODE_FOR_lsx_ ## INSN,						\
+    "__builtin_lsx_" #INSN,  LARCH_BUILTIN_DIRECT_NO_TARGET,		\
+    FUNCTION_TYPE, loongarch_builtin_avail_lsx }
+
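A sketch of what one of these macros is expected to produce, given the CODE_FOR_lsx_* remapping that follows and the lsx availability predicate declared above:

  /* LSX_BUILTIN (vadd_w, LARCH_V4SI_FTYPE_V4SI_V4SI) should yield the entry
       { CODE_FOR_addv4si3, "__builtin_lsx_vadd_w", LARCH_BUILTIN_DIRECT,
         LARCH_V4SI_FTYPE_V4SI_V4SI, loongarch_builtin_avail_lsx },
     i.e. the builtin reuses the standard addv4si3 pattern and is only
     available when ISA_HAS_LSX holds.  */
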
+/* LoongArch SX: map CODE_FOR_lsx_xxx onto the insn codes that
+   implement them.  */
+#define CODE_FOR_lsx_vsadd_b CODE_FOR_ssaddv16qi3
+#define CODE_FOR_lsx_vsadd_h CODE_FOR_ssaddv8hi3
+#define CODE_FOR_lsx_vsadd_w CODE_FOR_ssaddv4si3
+#define CODE_FOR_lsx_vsadd_d CODE_FOR_ssaddv2di3
+#define CODE_FOR_lsx_vsadd_bu CODE_FOR_usaddv16qi3
+#define CODE_FOR_lsx_vsadd_hu CODE_FOR_usaddv8hi3
+#define CODE_FOR_lsx_vsadd_wu CODE_FOR_usaddv4si3
+#define CODE_FOR_lsx_vsadd_du CODE_FOR_usaddv2di3
+#define CODE_FOR_lsx_vadd_b CODE_FOR_addv16qi3
+#define CODE_FOR_lsx_vadd_h CODE_FOR_addv8hi3
+#define CODE_FOR_lsx_vadd_w CODE_FOR_addv4si3
+#define CODE_FOR_lsx_vadd_d CODE_FOR_addv2di3
+#define CODE_FOR_lsx_vaddi_bu CODE_FOR_addv16qi3
+#define CODE_FOR_lsx_vaddi_hu CODE_FOR_addv8hi3
+#define CODE_FOR_lsx_vaddi_wu CODE_FOR_addv4si3
+#define CODE_FOR_lsx_vaddi_du CODE_FOR_addv2di3
+#define CODE_FOR_lsx_vand_v CODE_FOR_andv16qi3
+#define CODE_FOR_lsx_vandi_b CODE_FOR_andv16qi3
+#define CODE_FOR_lsx_bnz_v CODE_FOR_lsx_bnz_v_b
+#define CODE_FOR_lsx_bz_v CODE_FOR_lsx_bz_v_b
+#define CODE_FOR_lsx_vbitsel_v CODE_FOR_lsx_vbitsel_b
+#define CODE_FOR_lsx_vseqi_b CODE_FOR_lsx_vseq_b
+#define CODE_FOR_lsx_vseqi_h CODE_FOR_lsx_vseq_h
+#define CODE_FOR_lsx_vseqi_w CODE_FOR_lsx_vseq_w
+#define CODE_FOR_lsx_vseqi_d CODE_FOR_lsx_vseq_d
+#define CODE_FOR_lsx_vslti_b CODE_FOR_lsx_vslt_b
+#define CODE_FOR_lsx_vslti_h CODE_FOR_lsx_vslt_h
+#define CODE_FOR_lsx_vslti_w CODE_FOR_lsx_vslt_w
+#define CODE_FOR_lsx_vslti_d CODE_FOR_lsx_vslt_d
+#define CODE_FOR_lsx_vslti_bu CODE_FOR_lsx_vslt_bu
+#define CODE_FOR_lsx_vslti_hu CODE_FOR_lsx_vslt_hu
+#define CODE_FOR_lsx_vslti_wu CODE_FOR_lsx_vslt_wu
+#define CODE_FOR_lsx_vslti_du CODE_FOR_lsx_vslt_du
+#define CODE_FOR_lsx_vslei_b CODE_FOR_lsx_vsle_b
+#define CODE_FOR_lsx_vslei_h CODE_FOR_lsx_vsle_h
+#define CODE_FOR_lsx_vslei_w CODE_FOR_lsx_vsle_w
+#define CODE_FOR_lsx_vslei_d CODE_FOR_lsx_vsle_d
+#define CODE_FOR_lsx_vslei_bu CODE_FOR_lsx_vsle_bu
+#define CODE_FOR_lsx_vslei_hu CODE_FOR_lsx_vsle_hu
+#define CODE_FOR_lsx_vslei_wu CODE_FOR_lsx_vsle_wu
+#define CODE_FOR_lsx_vslei_du CODE_FOR_lsx_vsle_du
+#define CODE_FOR_lsx_vdiv_b CODE_FOR_divv16qi3
+#define CODE_FOR_lsx_vdiv_h CODE_FOR_divv8hi3
+#define CODE_FOR_lsx_vdiv_w CODE_FOR_divv4si3
+#define CODE_FOR_lsx_vdiv_d CODE_FOR_divv2di3
+#define CODE_FOR_lsx_vdiv_bu CODE_FOR_udivv16qi3
+#define CODE_FOR_lsx_vdiv_hu CODE_FOR_udivv8hi3
+#define CODE_FOR_lsx_vdiv_wu CODE_FOR_udivv4si3
+#define CODE_FOR_lsx_vdiv_du CODE_FOR_udivv2di3
+#define CODE_FOR_lsx_vfadd_s CODE_FOR_addv4sf3
+#define CODE_FOR_lsx_vfadd_d CODE_FOR_addv2df3
+#define CODE_FOR_lsx_vftintrz_w_s CODE_FOR_fix_truncv4sfv4si2
+#define CODE_FOR_lsx_vftintrz_l_d CODE_FOR_fix_truncv2dfv2di2
+#define CODE_FOR_lsx_vftintrz_wu_s CODE_FOR_fixuns_truncv4sfv4si2
+#define CODE_FOR_lsx_vftintrz_lu_d CODE_FOR_fixuns_truncv2dfv2di2
+#define CODE_FOR_lsx_vffint_s_w CODE_FOR_floatv4siv4sf2
+#define CODE_FOR_lsx_vffint_d_l CODE_FOR_floatv2div2df2
+#define CODE_FOR_lsx_vffint_s_wu CODE_FOR_floatunsv4siv4sf2
+#define CODE_FOR_lsx_vffint_d_lu CODE_FOR_floatunsv2div2df2
+#define CODE_FOR_lsx_vfsub_s CODE_FOR_subv4sf3
+#define CODE_FOR_lsx_vfsub_d CODE_FOR_subv2df3
+#define CODE_FOR_lsx_vfmul_s CODE_FOR_mulv4sf3
+#define CODE_FOR_lsx_vfmul_d CODE_FOR_mulv2df3
+#define CODE_FOR_lsx_vfdiv_s CODE_FOR_divv4sf3
+#define CODE_FOR_lsx_vfdiv_d CODE_FOR_divv2df3
+#define CODE_FOR_lsx_vfmax_s CODE_FOR_smaxv4sf3
+#define CODE_FOR_lsx_vfmax_d CODE_FOR_smaxv2df3
+#define CODE_FOR_lsx_vfmin_s CODE_FOR_sminv4sf3
+#define CODE_FOR_lsx_vfmin_d CODE_FOR_sminv2df3
+#define CODE_FOR_lsx_vfsqrt_s CODE_FOR_sqrtv4sf2
+#define CODE_FOR_lsx_vfsqrt_d CODE_FOR_sqrtv2df2
+#define CODE_FOR_lsx_vflogb_s CODE_FOR_logbv4sf2
+#define CODE_FOR_lsx_vflogb_d CODE_FOR_logbv2df2
+#define CODE_FOR_lsx_vmax_b CODE_FOR_smaxv16qi3
+#define CODE_FOR_lsx_vmax_h CODE_FOR_smaxv8hi3
+#define CODE_FOR_lsx_vmax_w CODE_FOR_smaxv4si3
+#define CODE_FOR_lsx_vmax_d CODE_FOR_smaxv2di3
+#define CODE_FOR_lsx_vmaxi_b CODE_FOR_smaxv16qi3
+#define CODE_FOR_lsx_vmaxi_h CODE_FOR_smaxv8hi3
+#define CODE_FOR_lsx_vmaxi_w CODE_FOR_smaxv4si3
+#define CODE_FOR_lsx_vmaxi_d CODE_FOR_smaxv2di3
+#define CODE_FOR_lsx_vmax_bu CODE_FOR_umaxv16qi3
+#define CODE_FOR_lsx_vmax_hu CODE_FOR_umaxv8hi3
+#define CODE_FOR_lsx_vmax_wu CODE_FOR_umaxv4si3
+#define CODE_FOR_lsx_vmax_du CODE_FOR_umaxv2di3
+#define CODE_FOR_lsx_vmaxi_bu CODE_FOR_umaxv16qi3
+#define CODE_FOR_lsx_vmaxi_hu CODE_FOR_umaxv8hi3
+#define CODE_FOR_lsx_vmaxi_wu CODE_FOR_umaxv4si3
+#define CODE_FOR_lsx_vmaxi_du CODE_FOR_umaxv2di3
+#define CODE_FOR_lsx_vmin_b CODE_FOR_sminv16qi3
+#define CODE_FOR_lsx_vmin_h CODE_FOR_sminv8hi3
+#define CODE_FOR_lsx_vmin_w CODE_FOR_sminv4si3
+#define CODE_FOR_lsx_vmin_d CODE_FOR_sminv2di3
+#define CODE_FOR_lsx_vmini_b CODE_FOR_sminv16qi3
+#define CODE_FOR_lsx_vmini_h CODE_FOR_sminv8hi3
+#define CODE_FOR_lsx_vmini_w CODE_FOR_sminv4si3
+#define CODE_FOR_lsx_vmini_d CODE_FOR_sminv2di3
+#define CODE_FOR_lsx_vmin_bu CODE_FOR_uminv16qi3
+#define CODE_FOR_lsx_vmin_hu CODE_FOR_uminv8hi3
+#define CODE_FOR_lsx_vmin_wu CODE_FOR_uminv4si3
+#define CODE_FOR_lsx_vmin_du CODE_FOR_uminv2di3
+#define CODE_FOR_lsx_vmini_bu CODE_FOR_uminv16qi3
+#define CODE_FOR_lsx_vmini_hu CODE_FOR_uminv8hi3
+#define CODE_FOR_lsx_vmini_wu CODE_FOR_uminv4si3
+#define CODE_FOR_lsx_vmini_du CODE_FOR_uminv2di3
+#define CODE_FOR_lsx_vmod_b CODE_FOR_modv16qi3
+#define CODE_FOR_lsx_vmod_h CODE_FOR_modv8hi3
+#define CODE_FOR_lsx_vmod_w CODE_FOR_modv4si3
+#define CODE_FOR_lsx_vmod_d CODE_FOR_modv2di3
+#define CODE_FOR_lsx_vmod_bu CODE_FOR_umodv16qi3
+#define CODE_FOR_lsx_vmod_hu CODE_FOR_umodv8hi3
+#define CODE_FOR_lsx_vmod_wu CODE_FOR_umodv4si3
+#define CODE_FOR_lsx_vmod_du CODE_FOR_umodv2di3
+#define CODE_FOR_lsx_vmul_b CODE_FOR_mulv16qi3
+#define CODE_FOR_lsx_vmul_h CODE_FOR_mulv8hi3
+#define CODE_FOR_lsx_vmul_w CODE_FOR_mulv4si3
+#define CODE_FOR_lsx_vmul_d CODE_FOR_mulv2di3
+#define CODE_FOR_lsx_vclz_b CODE_FOR_clzv16qi2
+#define CODE_FOR_lsx_vclz_h CODE_FOR_clzv8hi2
+#define CODE_FOR_lsx_vclz_w CODE_FOR_clzv4si2
+#define CODE_FOR_lsx_vclz_d CODE_FOR_clzv2di2
+#define CODE_FOR_lsx_vnor_v CODE_FOR_lsx_nor_b
+#define CODE_FOR_lsx_vor_v CODE_FOR_iorv16qi3
+#define CODE_FOR_lsx_vori_b CODE_FOR_iorv16qi3
+#define CODE_FOR_lsx_vnori_b CODE_FOR_lsx_nor_b
+#define CODE_FOR_lsx_vpcnt_b CODE_FOR_popcountv16qi2
+#define CODE_FOR_lsx_vpcnt_h CODE_FOR_popcountv8hi2
+#define CODE_FOR_lsx_vpcnt_w CODE_FOR_popcountv4si2
+#define CODE_FOR_lsx_vpcnt_d CODE_FOR_popcountv2di2
+#define CODE_FOR_lsx_vxor_v CODE_FOR_xorv16qi3
+#define CODE_FOR_lsx_vxori_b CODE_FOR_xorv16qi3
+#define CODE_FOR_lsx_vsll_b CODE_FOR_vashlv16qi3
+#define CODE_FOR_lsx_vsll_h CODE_FOR_vashlv8hi3
+#define CODE_FOR_lsx_vsll_w CODE_FOR_vashlv4si3
+#define CODE_FOR_lsx_vsll_d CODE_FOR_vashlv2di3
+#define CODE_FOR_lsx_vslli_b CODE_FOR_vashlv16qi3
+#define CODE_FOR_lsx_vslli_h CODE_FOR_vashlv8hi3
+#define CODE_FOR_lsx_vslli_w CODE_FOR_vashlv4si3
+#define CODE_FOR_lsx_vslli_d CODE_FOR_vashlv2di3
+#define CODE_FOR_lsx_vsra_b CODE_FOR_vashrv16qi3
+#define CODE_FOR_lsx_vsra_h CODE_FOR_vashrv8hi3
+#define CODE_FOR_lsx_vsra_w CODE_FOR_vashrv4si3
+#define CODE_FOR_lsx_vsra_d CODE_FOR_vashrv2di3
+#define CODE_FOR_lsx_vsrai_b CODE_FOR_vashrv16qi3
+#define CODE_FOR_lsx_vsrai_h CODE_FOR_vashrv8hi3
+#define CODE_FOR_lsx_vsrai_w CODE_FOR_vashrv4si3
+#define CODE_FOR_lsx_vsrai_d CODE_FOR_vashrv2di3
+#define CODE_FOR_lsx_vsrl_b CODE_FOR_vlshrv16qi3
+#define CODE_FOR_lsx_vsrl_h CODE_FOR_vlshrv8hi3
+#define CODE_FOR_lsx_vsrl_w CODE_FOR_vlshrv4si3
+#define CODE_FOR_lsx_vsrl_d CODE_FOR_vlshrv2di3
+#define CODE_FOR_lsx_vsrli_b CODE_FOR_vlshrv16qi3
+#define CODE_FOR_lsx_vsrli_h CODE_FOR_vlshrv8hi3
+#define CODE_FOR_lsx_vsrli_w CODE_FOR_vlshrv4si3
+#define CODE_FOR_lsx_vsrli_d CODE_FOR_vlshrv2di3
+#define CODE_FOR_lsx_vsub_b CODE_FOR_subv16qi3
+#define CODE_FOR_lsx_vsub_h CODE_FOR_subv8hi3
+#define CODE_FOR_lsx_vsub_w CODE_FOR_subv4si3
+#define CODE_FOR_lsx_vsub_d CODE_FOR_subv2di3
+#define CODE_FOR_lsx_vsubi_bu CODE_FOR_subv16qi3
+#define CODE_FOR_lsx_vsubi_hu CODE_FOR_subv8hi3
+#define CODE_FOR_lsx_vsubi_wu CODE_FOR_subv4si3
+#define CODE_FOR_lsx_vsubi_du CODE_FOR_subv2di3
+
+#define CODE_FOR_lsx_vpackod_d CODE_FOR_lsx_vilvh_d
+#define CODE_FOR_lsx_vpackev_d CODE_FOR_lsx_vilvl_d
+#define CODE_FOR_lsx_vpickod_d CODE_FOR_lsx_vilvh_d
+#define CODE_FOR_lsx_vpickev_d CODE_FOR_lsx_vilvl_d
+
+#define CODE_FOR_lsx_vrepli_b CODE_FOR_lsx_vrepliv16qi
+#define CODE_FOR_lsx_vrepli_h CODE_FOR_lsx_vrepliv8hi
+#define CODE_FOR_lsx_vrepli_w CODE_FOR_lsx_vrepliv4si
+#define CODE_FOR_lsx_vrepli_d CODE_FOR_lsx_vrepliv2di
+#define CODE_FOR_lsx_vsat_b CODE_FOR_lsx_vsat_s_b
+#define CODE_FOR_lsx_vsat_h CODE_FOR_lsx_vsat_s_h
+#define CODE_FOR_lsx_vsat_w CODE_FOR_lsx_vsat_s_w
+#define CODE_FOR_lsx_vsat_d CODE_FOR_lsx_vsat_s_d
+#define CODE_FOR_lsx_vsat_bu CODE_FOR_lsx_vsat_u_bu
+#define CODE_FOR_lsx_vsat_hu CODE_FOR_lsx_vsat_u_hu
+#define CODE_FOR_lsx_vsat_wu CODE_FOR_lsx_vsat_u_wu
+#define CODE_FOR_lsx_vsat_du CODE_FOR_lsx_vsat_u_du
+#define CODE_FOR_lsx_vavg_b CODE_FOR_lsx_vavg_s_b
+#define CODE_FOR_lsx_vavg_h CODE_FOR_lsx_vavg_s_h
+#define CODE_FOR_lsx_vavg_w CODE_FOR_lsx_vavg_s_w
+#define CODE_FOR_lsx_vavg_d CODE_FOR_lsx_vavg_s_d
+#define CODE_FOR_lsx_vavg_bu CODE_FOR_lsx_vavg_u_bu
+#define CODE_FOR_lsx_vavg_hu CODE_FOR_lsx_vavg_u_hu
+#define CODE_FOR_lsx_vavg_wu CODE_FOR_lsx_vavg_u_wu
+#define CODE_FOR_lsx_vavg_du CODE_FOR_lsx_vavg_u_du
+#define CODE_FOR_lsx_vavgr_b CODE_FOR_lsx_vavgr_s_b
+#define CODE_FOR_lsx_vavgr_h CODE_FOR_lsx_vavgr_s_h
+#define CODE_FOR_lsx_vavgr_w CODE_FOR_lsx_vavgr_s_w
+#define CODE_FOR_lsx_vavgr_d CODE_FOR_lsx_vavgr_s_d
+#define CODE_FOR_lsx_vavgr_bu CODE_FOR_lsx_vavgr_u_bu
+#define CODE_FOR_lsx_vavgr_hu CODE_FOR_lsx_vavgr_u_hu
+#define CODE_FOR_lsx_vavgr_wu CODE_FOR_lsx_vavgr_u_wu
+#define CODE_FOR_lsx_vavgr_du CODE_FOR_lsx_vavgr_u_du
+#define CODE_FOR_lsx_vssub_b CODE_FOR_lsx_vssub_s_b
+#define CODE_FOR_lsx_vssub_h CODE_FOR_lsx_vssub_s_h
+#define CODE_FOR_lsx_vssub_w CODE_FOR_lsx_vssub_s_w
+#define CODE_FOR_lsx_vssub_d CODE_FOR_lsx_vssub_s_d
+#define CODE_FOR_lsx_vssub_bu CODE_FOR_lsx_vssub_u_bu
+#define CODE_FOR_lsx_vssub_hu CODE_FOR_lsx_vssub_u_hu
+#define CODE_FOR_lsx_vssub_wu CODE_FOR_lsx_vssub_u_wu
+#define CODE_FOR_lsx_vssub_du CODE_FOR_lsx_vssub_u_du
+#define CODE_FOR_lsx_vabsd_b CODE_FOR_lsx_vabsd_s_b
+#define CODE_FOR_lsx_vabsd_h CODE_FOR_lsx_vabsd_s_h
+#define CODE_FOR_lsx_vabsd_w CODE_FOR_lsx_vabsd_s_w
+#define CODE_FOR_lsx_vabsd_d CODE_FOR_lsx_vabsd_s_d
+#define CODE_FOR_lsx_vabsd_bu CODE_FOR_lsx_vabsd_u_bu
+#define CODE_FOR_lsx_vabsd_hu CODE_FOR_lsx_vabsd_u_hu
+#define CODE_FOR_lsx_vabsd_wu CODE_FOR_lsx_vabsd_u_wu
+#define CODE_FOR_lsx_vabsd_du CODE_FOR_lsx_vabsd_u_du
+#define CODE_FOR_lsx_vftint_w_s CODE_FOR_lsx_vftint_s_w_s
+#define CODE_FOR_lsx_vftint_l_d CODE_FOR_lsx_vftint_s_l_d
+#define CODE_FOR_lsx_vftint_wu_s CODE_FOR_lsx_vftint_u_wu_s
+#define CODE_FOR_lsx_vftint_lu_d CODE_FOR_lsx_vftint_u_lu_d
+#define CODE_FOR_lsx_vandn_v CODE_FOR_vandnv16qi3
+#define CODE_FOR_lsx_vorn_v CODE_FOR_vornv16qi3
+#define CODE_FOR_lsx_vneg_b CODE_FOR_vnegv16qi2
+#define CODE_FOR_lsx_vneg_h CODE_FOR_vnegv8hi2
+#define CODE_FOR_lsx_vneg_w CODE_FOR_vnegv4si2
+#define CODE_FOR_lsx_vneg_d CODE_FOR_vnegv2di2
+#define CODE_FOR_lsx_vshuf4i_d CODE_FOR_lsx_vshuf4i_d
+#define CODE_FOR_lsx_vbsrl_v CODE_FOR_lsx_vbsrl_b
+#define CODE_FOR_lsx_vbsll_v CODE_FOR_lsx_vbsll_b
+#define CODE_FOR_lsx_vfmadd_s CODE_FOR_fmav4sf4
+#define CODE_FOR_lsx_vfmadd_d CODE_FOR_fmav2df4
+#define CODE_FOR_lsx_vfmsub_s CODE_FOR_fmsv4sf4
+#define CODE_FOR_lsx_vfmsub_d CODE_FOR_fmsv2df4
+#define CODE_FOR_lsx_vfnmadd_s CODE_FOR_vfnmaddv4sf4_nmadd4
+#define CODE_FOR_lsx_vfnmadd_d CODE_FOR_vfnmaddv2df4_nmadd4
+#define CODE_FOR_lsx_vfnmsub_s CODE_FOR_vfnmsubv4sf4_nmsub4
+#define CODE_FOR_lsx_vfnmsub_d CODE_FOR_vfnmsubv2df4_nmsub4
+
+#define CODE_FOR_lsx_vmuh_b CODE_FOR_lsx_vmuh_s_b
+#define CODE_FOR_lsx_vmuh_h CODE_FOR_lsx_vmuh_s_h
+#define CODE_FOR_lsx_vmuh_w CODE_FOR_lsx_vmuh_s_w
+#define CODE_FOR_lsx_vmuh_d CODE_FOR_lsx_vmuh_s_d
+#define CODE_FOR_lsx_vmuh_bu CODE_FOR_lsx_vmuh_u_bu
+#define CODE_FOR_lsx_vmuh_hu CODE_FOR_lsx_vmuh_u_hu
+#define CODE_FOR_lsx_vmuh_wu CODE_FOR_lsx_vmuh_u_wu
+#define CODE_FOR_lsx_vmuh_du CODE_FOR_lsx_vmuh_u_du
+#define CODE_FOR_lsx_vsllwil_h_b CODE_FOR_lsx_vsllwil_s_h_b
+#define CODE_FOR_lsx_vsllwil_w_h CODE_FOR_lsx_vsllwil_s_w_h
+#define CODE_FOR_lsx_vsllwil_d_w CODE_FOR_lsx_vsllwil_s_d_w
+#define CODE_FOR_lsx_vsllwil_hu_bu CODE_FOR_lsx_vsllwil_u_hu_bu
+#define CODE_FOR_lsx_vsllwil_wu_hu CODE_FOR_lsx_vsllwil_u_wu_hu
+#define CODE_FOR_lsx_vsllwil_du_wu CODE_FOR_lsx_vsllwil_u_du_wu
+#define CODE_FOR_lsx_vssran_b_h CODE_FOR_lsx_vssran_s_b_h
+#define CODE_FOR_lsx_vssran_h_w CODE_FOR_lsx_vssran_s_h_w
+#define CODE_FOR_lsx_vssran_w_d CODE_FOR_lsx_vssran_s_w_d
+#define CODE_FOR_lsx_vssran_bu_h CODE_FOR_lsx_vssran_u_bu_h
+#define CODE_FOR_lsx_vssran_hu_w CODE_FOR_lsx_vssran_u_hu_w
+#define CODE_FOR_lsx_vssran_wu_d CODE_FOR_lsx_vssran_u_wu_d
+#define CODE_FOR_lsx_vssrarn_b_h CODE_FOR_lsx_vssrarn_s_b_h
+#define CODE_FOR_lsx_vssrarn_h_w CODE_FOR_lsx_vssrarn_s_h_w
+#define CODE_FOR_lsx_vssrarn_w_d CODE_FOR_lsx_vssrarn_s_w_d
+#define CODE_FOR_lsx_vssrarn_bu_h CODE_FOR_lsx_vssrarn_u_bu_h
+#define CODE_FOR_lsx_vssrarn_hu_w CODE_FOR_lsx_vssrarn_u_hu_w
+#define CODE_FOR_lsx_vssrarn_wu_d CODE_FOR_lsx_vssrarn_u_wu_d
+#define CODE_FOR_lsx_vssrln_bu_h CODE_FOR_lsx_vssrln_u_bu_h
+#define CODE_FOR_lsx_vssrln_hu_w CODE_FOR_lsx_vssrln_u_hu_w
+#define CODE_FOR_lsx_vssrln_wu_d CODE_FOR_lsx_vssrln_u_wu_d
+#define CODE_FOR_lsx_vssrlrn_bu_h CODE_FOR_lsx_vssrlrn_u_bu_h
+#define CODE_FOR_lsx_vssrlrn_hu_w CODE_FOR_lsx_vssrlrn_u_hu_w
+#define CODE_FOR_lsx_vssrlrn_wu_d CODE_FOR_lsx_vssrlrn_u_wu_d
+
 static const struct loongarch_builtin_description loongarch_builtins[] = {
 #define LARCH_MOVFCSR2GR 0
   DIRECT_BUILTIN (movfcsr2gr, LARCH_USI_FTYPE_UQI, hard_float),
@@ -184,6 +489,727 @@ static const struct loongarch_builtin_description loongarch_builtins[] = {
   DIRECT_NO_TARGET_BUILTIN (asrtgt_d, LARCH_VOID_FTYPE_DI_DI, default),
   DIRECT_NO_TARGET_BUILTIN (syscall, LARCH_VOID_FTYPE_USI, default),
   DIRECT_NO_TARGET_BUILTIN (break, LARCH_VOID_FTYPE_USI, default),
+
+  /* Built-in functions for LSX.  */
+  LSX_BUILTIN (vsll_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vsll_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsll_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsll_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vslli_b, LARCH_V16QI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vslli_h, LARCH_V8HI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vslli_w, LARCH_V4SI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vslli_d, LARCH_V2DI_FTYPE_V2DI_UQI),
+  LSX_BUILTIN (vsra_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vsra_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsra_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsra_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vsrai_b, LARCH_V16QI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vsrai_h, LARCH_V8HI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vsrai_w, LARCH_V4SI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vsrai_d, LARCH_V2DI_FTYPE_V2DI_UQI),
+  LSX_BUILTIN (vsrar_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vsrar_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsrar_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsrar_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vsrari_b, LARCH_V16QI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vsrari_h, LARCH_V8HI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vsrari_w, LARCH_V4SI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vsrari_d, LARCH_V2DI_FTYPE_V2DI_UQI),
+  LSX_BUILTIN (vsrl_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vsrl_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsrl_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsrl_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vsrli_b, LARCH_V16QI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vsrli_h, LARCH_V8HI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vsrli_w, LARCH_V4SI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vsrli_d, LARCH_V2DI_FTYPE_V2DI_UQI),
+  LSX_BUILTIN (vsrlr_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vsrlr_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsrlr_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsrlr_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vsrlri_b, LARCH_V16QI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vsrlri_h, LARCH_V8HI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vsrlri_w, LARCH_V4SI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vsrlri_d, LARCH_V2DI_FTYPE_V2DI_UQI),
+  LSX_BUILTIN (vbitclr_b, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vbitclr_h, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vbitclr_w, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vbitclr_d, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vbitclri_b, LARCH_UV16QI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vbitclri_h, LARCH_UV8HI_FTYPE_UV8HI_UQI),
+  LSX_BUILTIN (vbitclri_w, LARCH_UV4SI_FTYPE_UV4SI_UQI),
+  LSX_BUILTIN (vbitclri_d, LARCH_UV2DI_FTYPE_UV2DI_UQI),
+  LSX_BUILTIN (vbitset_b, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vbitset_h, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vbitset_w, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vbitset_d, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vbitseti_b, LARCH_UV16QI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vbitseti_h, LARCH_UV8HI_FTYPE_UV8HI_UQI),
+  LSX_BUILTIN (vbitseti_w, LARCH_UV4SI_FTYPE_UV4SI_UQI),
+  LSX_BUILTIN (vbitseti_d, LARCH_UV2DI_FTYPE_UV2DI_UQI),
+  LSX_BUILTIN (vbitrev_b, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vbitrev_h, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vbitrev_w, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vbitrev_d, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vbitrevi_b, LARCH_UV16QI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vbitrevi_h, LARCH_UV8HI_FTYPE_UV8HI_UQI),
+  LSX_BUILTIN (vbitrevi_w, LARCH_UV4SI_FTYPE_UV4SI_UQI),
+  LSX_BUILTIN (vbitrevi_d, LARCH_UV2DI_FTYPE_UV2DI_UQI),
+  LSX_BUILTIN (vadd_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vadd_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vadd_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vadd_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vaddi_bu, LARCH_V16QI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vaddi_hu, LARCH_V8HI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vaddi_wu, LARCH_V4SI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vaddi_du, LARCH_V2DI_FTYPE_V2DI_UQI),
+  LSX_BUILTIN (vsub_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vsub_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsub_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsub_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vsubi_bu, LARCH_V16QI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vsubi_hu, LARCH_V8HI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vsubi_wu, LARCH_V4SI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vsubi_du, LARCH_V2DI_FTYPE_V2DI_UQI),
+  LSX_BUILTIN (vmax_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vmax_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vmax_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vmax_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vmaxi_b, LARCH_V16QI_FTYPE_V16QI_QI),
+  LSX_BUILTIN (vmaxi_h, LARCH_V8HI_FTYPE_V8HI_QI),
+  LSX_BUILTIN (vmaxi_w, LARCH_V4SI_FTYPE_V4SI_QI),
+  LSX_BUILTIN (vmaxi_d, LARCH_V2DI_FTYPE_V2DI_QI),
+  LSX_BUILTIN (vmax_bu, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vmax_hu, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vmax_wu, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vmax_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vmaxi_bu, LARCH_UV16QI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vmaxi_hu, LARCH_UV8HI_FTYPE_UV8HI_UQI),
+  LSX_BUILTIN (vmaxi_wu, LARCH_UV4SI_FTYPE_UV4SI_UQI),
+  LSX_BUILTIN (vmaxi_du, LARCH_UV2DI_FTYPE_UV2DI_UQI),
+  LSX_BUILTIN (vmin_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vmin_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vmin_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vmin_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vmini_b, LARCH_V16QI_FTYPE_V16QI_QI),
+  LSX_BUILTIN (vmini_h, LARCH_V8HI_FTYPE_V8HI_QI),
+  LSX_BUILTIN (vmini_w, LARCH_V4SI_FTYPE_V4SI_QI),
+  LSX_BUILTIN (vmini_d, LARCH_V2DI_FTYPE_V2DI_QI),
+  LSX_BUILTIN (vmin_bu, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vmin_hu, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vmin_wu, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vmin_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vmini_bu, LARCH_UV16QI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vmini_hu, LARCH_UV8HI_FTYPE_UV8HI_UQI),
+  LSX_BUILTIN (vmini_wu, LARCH_UV4SI_FTYPE_UV4SI_UQI),
+  LSX_BUILTIN (vmini_du, LARCH_UV2DI_FTYPE_UV2DI_UQI),
+  LSX_BUILTIN (vseq_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vseq_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vseq_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vseq_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vseqi_b, LARCH_V16QI_FTYPE_V16QI_QI),
+  LSX_BUILTIN (vseqi_h, LARCH_V8HI_FTYPE_V8HI_QI),
+  LSX_BUILTIN (vseqi_w, LARCH_V4SI_FTYPE_V4SI_QI),
+  LSX_BUILTIN (vseqi_d, LARCH_V2DI_FTYPE_V2DI_QI),
+  LSX_BUILTIN (vslti_b, LARCH_V16QI_FTYPE_V16QI_QI),
+  LSX_BUILTIN (vslt_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vslt_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vslt_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vslt_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vslti_h, LARCH_V8HI_FTYPE_V8HI_QI),
+  LSX_BUILTIN (vslti_w, LARCH_V4SI_FTYPE_V4SI_QI),
+  LSX_BUILTIN (vslti_d, LARCH_V2DI_FTYPE_V2DI_QI),
+  LSX_BUILTIN (vslt_bu, LARCH_V16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vslt_hu, LARCH_V8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vslt_wu, LARCH_V4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vslt_du, LARCH_V2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vslti_bu, LARCH_V16QI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vslti_hu, LARCH_V8HI_FTYPE_UV8HI_UQI),
+  LSX_BUILTIN (vslti_wu, LARCH_V4SI_FTYPE_UV4SI_UQI),
+  LSX_BUILTIN (vslti_du, LARCH_V2DI_FTYPE_UV2DI_UQI),
+  LSX_BUILTIN (vsle_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vsle_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsle_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsle_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vslei_b, LARCH_V16QI_FTYPE_V16QI_QI),
+  LSX_BUILTIN (vslei_h, LARCH_V8HI_FTYPE_V8HI_QI),
+  LSX_BUILTIN (vslei_w, LARCH_V4SI_FTYPE_V4SI_QI),
+  LSX_BUILTIN (vslei_d, LARCH_V2DI_FTYPE_V2DI_QI),
+  LSX_BUILTIN (vsle_bu, LARCH_V16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vsle_hu, LARCH_V8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vsle_wu, LARCH_V4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vsle_du, LARCH_V2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vslei_bu, LARCH_V16QI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vslei_hu, LARCH_V8HI_FTYPE_UV8HI_UQI),
+  LSX_BUILTIN (vslei_wu, LARCH_V4SI_FTYPE_UV4SI_UQI),
+  LSX_BUILTIN (vslei_du, LARCH_V2DI_FTYPE_UV2DI_UQI),
+  LSX_BUILTIN (vsat_b, LARCH_V16QI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vsat_h, LARCH_V8HI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vsat_w, LARCH_V4SI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vsat_d, LARCH_V2DI_FTYPE_V2DI_UQI),
+  LSX_BUILTIN (vsat_bu, LARCH_UV16QI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vsat_hu, LARCH_UV8HI_FTYPE_UV8HI_UQI),
+  LSX_BUILTIN (vsat_wu, LARCH_UV4SI_FTYPE_UV4SI_UQI),
+  LSX_BUILTIN (vsat_du, LARCH_UV2DI_FTYPE_UV2DI_UQI),
+  LSX_BUILTIN (vadda_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vadda_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vadda_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vadda_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vsadd_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vsadd_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsadd_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsadd_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vsadd_bu, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vsadd_hu, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vsadd_wu, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vsadd_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vavg_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vavg_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vavg_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vavg_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vavg_bu, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vavg_hu, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vavg_wu, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vavg_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vavgr_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vavgr_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vavgr_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vavgr_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vavgr_bu, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vavgr_hu, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vavgr_wu, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vavgr_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vssub_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vssub_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vssub_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vssub_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vssub_bu, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vssub_hu, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vssub_wu, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vssub_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vabsd_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vabsd_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vabsd_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vabsd_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vabsd_bu, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vabsd_hu, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vabsd_wu, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vabsd_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vmul_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vmul_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vmul_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vmul_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vmadd_b, LARCH_V16QI_FTYPE_V16QI_V16QI_V16QI),
+  LSX_BUILTIN (vmadd_h, LARCH_V8HI_FTYPE_V8HI_V8HI_V8HI),
+  LSX_BUILTIN (vmadd_w, LARCH_V4SI_FTYPE_V4SI_V4SI_V4SI),
+  LSX_BUILTIN (vmadd_d, LARCH_V2DI_FTYPE_V2DI_V2DI_V2DI),
+  LSX_BUILTIN (vmsub_b, LARCH_V16QI_FTYPE_V16QI_V16QI_V16QI),
+  LSX_BUILTIN (vmsub_h, LARCH_V8HI_FTYPE_V8HI_V8HI_V8HI),
+  LSX_BUILTIN (vmsub_w, LARCH_V4SI_FTYPE_V4SI_V4SI_V4SI),
+  LSX_BUILTIN (vmsub_d, LARCH_V2DI_FTYPE_V2DI_V2DI_V2DI),
+  LSX_BUILTIN (vdiv_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vdiv_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vdiv_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vdiv_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vdiv_bu, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vdiv_hu, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vdiv_wu, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vdiv_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vhaddw_h_b, LARCH_V8HI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vhaddw_w_h, LARCH_V4SI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vhaddw_d_w, LARCH_V2DI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vhaddw_hu_bu, LARCH_UV8HI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vhaddw_wu_hu, LARCH_UV4SI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vhaddw_du_wu, LARCH_UV2DI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vhsubw_h_b, LARCH_V8HI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vhsubw_w_h, LARCH_V4SI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vhsubw_d_w, LARCH_V2DI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vhsubw_hu_bu, LARCH_V8HI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vhsubw_wu_hu, LARCH_V4SI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vhsubw_du_wu, LARCH_V2DI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vmod_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vmod_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vmod_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vmod_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vmod_bu, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vmod_hu, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vmod_wu, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vmod_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vreplve_b, LARCH_V16QI_FTYPE_V16QI_SI),
+  LSX_BUILTIN (vreplve_h, LARCH_V8HI_FTYPE_V8HI_SI),
+  LSX_BUILTIN (vreplve_w, LARCH_V4SI_FTYPE_V4SI_SI),
+  LSX_BUILTIN (vreplve_d, LARCH_V2DI_FTYPE_V2DI_SI),
+  LSX_BUILTIN (vreplvei_b, LARCH_V16QI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vreplvei_h, LARCH_V8HI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vreplvei_w, LARCH_V4SI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vreplvei_d, LARCH_V2DI_FTYPE_V2DI_UQI),
+  LSX_BUILTIN (vpickev_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vpickev_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vpickev_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vpickev_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vpickod_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vpickod_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vpickod_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vpickod_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vilvh_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vilvh_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vilvh_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vilvh_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vilvl_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vilvl_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vilvl_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vilvl_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vpackev_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vpackev_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vpackev_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vpackev_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vpackod_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vpackod_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vpackod_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vpackod_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vshuf_h, LARCH_V8HI_FTYPE_V8HI_V8HI_V8HI),
+  LSX_BUILTIN (vshuf_w, LARCH_V4SI_FTYPE_V4SI_V4SI_V4SI),
+  LSX_BUILTIN (vshuf_d, LARCH_V2DI_FTYPE_V2DI_V2DI_V2DI),
+  LSX_BUILTIN (vand_v, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vandi_b, LARCH_UV16QI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vor_v, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vori_b, LARCH_UV16QI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vnor_v, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vnori_b, LARCH_UV16QI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vxor_v, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vxori_b, LARCH_UV16QI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vbitsel_v, LARCH_UV16QI_FTYPE_UV16QI_UV16QI_UV16QI),
+  LSX_BUILTIN (vbitseli_b, LARCH_UV16QI_FTYPE_UV16QI_UV16QI_USI),
+  LSX_BUILTIN (vshuf4i_b, LARCH_V16QI_FTYPE_V16QI_USI),
+  LSX_BUILTIN (vshuf4i_h, LARCH_V8HI_FTYPE_V8HI_USI),
+  LSX_BUILTIN (vshuf4i_w, LARCH_V4SI_FTYPE_V4SI_USI),
+  LSX_BUILTIN (vreplgr2vr_b, LARCH_V16QI_FTYPE_SI),
+  LSX_BUILTIN (vreplgr2vr_h, LARCH_V8HI_FTYPE_SI),
+  LSX_BUILTIN (vreplgr2vr_w, LARCH_V4SI_FTYPE_SI),
+  LSX_BUILTIN (vreplgr2vr_d, LARCH_V2DI_FTYPE_DI),
+  LSX_BUILTIN (vpcnt_b, LARCH_V16QI_FTYPE_V16QI),
+  LSX_BUILTIN (vpcnt_h, LARCH_V8HI_FTYPE_V8HI),
+  LSX_BUILTIN (vpcnt_w, LARCH_V4SI_FTYPE_V4SI),
+  LSX_BUILTIN (vpcnt_d, LARCH_V2DI_FTYPE_V2DI),
+  LSX_BUILTIN (vclo_b, LARCH_V16QI_FTYPE_V16QI),
+  LSX_BUILTIN (vclo_h, LARCH_V8HI_FTYPE_V8HI),
+  LSX_BUILTIN (vclo_w, LARCH_V4SI_FTYPE_V4SI),
+  LSX_BUILTIN (vclo_d, LARCH_V2DI_FTYPE_V2DI),
+  LSX_BUILTIN (vclz_b, LARCH_V16QI_FTYPE_V16QI),
+  LSX_BUILTIN (vclz_h, LARCH_V8HI_FTYPE_V8HI),
+  LSX_BUILTIN (vclz_w, LARCH_V4SI_FTYPE_V4SI),
+  LSX_BUILTIN (vclz_d, LARCH_V2DI_FTYPE_V2DI),
+  LSX_BUILTIN (vpickve2gr_b, LARCH_SI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vpickve2gr_h, LARCH_SI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vpickve2gr_w, LARCH_SI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vpickve2gr_d, LARCH_DI_FTYPE_V2DI_UQI),
+  LSX_BUILTIN (vpickve2gr_bu, LARCH_USI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vpickve2gr_hu, LARCH_USI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vpickve2gr_wu, LARCH_USI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vpickve2gr_du, LARCH_UDI_FTYPE_V2DI_UQI),
+  LSX_BUILTIN (vinsgr2vr_b, LARCH_V16QI_FTYPE_V16QI_SI_UQI),
+  LSX_BUILTIN (vinsgr2vr_h, LARCH_V8HI_FTYPE_V8HI_SI_UQI),
+  LSX_BUILTIN (vinsgr2vr_w, LARCH_V4SI_FTYPE_V4SI_SI_UQI),
+  LSX_BUILTIN (vinsgr2vr_d, LARCH_V2DI_FTYPE_V2DI_DI_UQI),
+  LSX_BUILTIN_TEST_BRANCH (bnz_b, LARCH_SI_FTYPE_UV16QI),
+  LSX_BUILTIN_TEST_BRANCH (bnz_h, LARCH_SI_FTYPE_UV8HI),
+  LSX_BUILTIN_TEST_BRANCH (bnz_w, LARCH_SI_FTYPE_UV4SI),
+  LSX_BUILTIN_TEST_BRANCH (bnz_d, LARCH_SI_FTYPE_UV2DI),
+  LSX_BUILTIN_TEST_BRANCH (bz_b, LARCH_SI_FTYPE_UV16QI),
+  LSX_BUILTIN_TEST_BRANCH (bz_h, LARCH_SI_FTYPE_UV8HI),
+  LSX_BUILTIN_TEST_BRANCH (bz_w, LARCH_SI_FTYPE_UV4SI),
+  LSX_BUILTIN_TEST_BRANCH (bz_d, LARCH_SI_FTYPE_UV2DI),
+  LSX_BUILTIN_TEST_BRANCH (bz_v, LARCH_SI_FTYPE_UV16QI),
+  LSX_BUILTIN_TEST_BRANCH (bnz_v, LARCH_SI_FTYPE_UV16QI),
+  LSX_BUILTIN (vrepli_b, LARCH_V16QI_FTYPE_HI),
+  LSX_BUILTIN (vrepli_h, LARCH_V8HI_FTYPE_HI),
+  LSX_BUILTIN (vrepli_w, LARCH_V4SI_FTYPE_HI),
+  LSX_BUILTIN (vrepli_d, LARCH_V2DI_FTYPE_HI),
+  LSX_BUILTIN (vfcmp_caf_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_caf_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_cor_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_cor_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_cun_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_cun_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_cune_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_cune_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_cueq_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_cueq_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_ceq_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_ceq_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_cne_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_cne_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_clt_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_clt_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_cult_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_cult_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_cle_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_cle_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_cule_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_cule_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_saf_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_saf_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_sor_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_sor_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_sun_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_sun_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_sune_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_sune_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_sueq_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_sueq_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_seq_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_seq_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_sne_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_sne_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_slt_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_slt_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_sult_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_sult_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_sle_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_sle_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcmp_sule_s, LARCH_V4SI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcmp_sule_d, LARCH_V2DI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfadd_s, LARCH_V4SF_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfadd_d, LARCH_V2DF_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfsub_s, LARCH_V4SF_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfsub_d, LARCH_V2DF_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfmul_s, LARCH_V4SF_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfmul_d, LARCH_V2DF_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfdiv_s, LARCH_V4SF_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfdiv_d, LARCH_V2DF_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfcvt_h_s, LARCH_V8HI_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfcvt_s_d, LARCH_V4SF_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfmin_s, LARCH_V4SF_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfmin_d, LARCH_V2DF_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfmina_s, LARCH_V4SF_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfmina_d, LARCH_V2DF_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfmax_s, LARCH_V4SF_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfmax_d, LARCH_V2DF_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfmaxa_s, LARCH_V4SF_FTYPE_V4SF_V4SF),
+  LSX_BUILTIN (vfmaxa_d, LARCH_V2DF_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vfclass_s, LARCH_V4SI_FTYPE_V4SF),
+  LSX_BUILTIN (vfclass_d, LARCH_V2DI_FTYPE_V2DF),
+  LSX_BUILTIN (vfsqrt_s, LARCH_V4SF_FTYPE_V4SF),
+  LSX_BUILTIN (vfsqrt_d, LARCH_V2DF_FTYPE_V2DF),
+  LSX_BUILTIN (vfrecip_s, LARCH_V4SF_FTYPE_V4SF),
+  LSX_BUILTIN (vfrecip_d, LARCH_V2DF_FTYPE_V2DF),
+  LSX_BUILTIN (vfrint_s, LARCH_V4SF_FTYPE_V4SF),
+  LSX_BUILTIN (vfrint_d, LARCH_V2DF_FTYPE_V2DF),
+  LSX_BUILTIN (vfrsqrt_s, LARCH_V4SF_FTYPE_V4SF),
+  LSX_BUILTIN (vfrsqrt_d, LARCH_V2DF_FTYPE_V2DF),
+  LSX_BUILTIN (vflogb_s, LARCH_V4SF_FTYPE_V4SF),
+  LSX_BUILTIN (vflogb_d, LARCH_V2DF_FTYPE_V2DF),
+  LSX_BUILTIN (vfcvth_s_h, LARCH_V4SF_FTYPE_V8HI),
+  LSX_BUILTIN (vfcvth_d_s, LARCH_V2DF_FTYPE_V4SF),
+  LSX_BUILTIN (vfcvtl_s_h, LARCH_V4SF_FTYPE_V8HI),
+  LSX_BUILTIN (vfcvtl_d_s, LARCH_V2DF_FTYPE_V4SF),
+  LSX_BUILTIN (vftint_w_s, LARCH_V4SI_FTYPE_V4SF),
+  LSX_BUILTIN (vftint_l_d, LARCH_V2DI_FTYPE_V2DF),
+  LSX_BUILTIN (vftint_wu_s, LARCH_UV4SI_FTYPE_V4SF),
+  LSX_BUILTIN (vftint_lu_d, LARCH_UV2DI_FTYPE_V2DF),
+  LSX_BUILTIN (vftintrz_w_s, LARCH_V4SI_FTYPE_V4SF),
+  LSX_BUILTIN (vftintrz_l_d, LARCH_V2DI_FTYPE_V2DF),
+  LSX_BUILTIN (vftintrz_wu_s, LARCH_UV4SI_FTYPE_V4SF),
+  LSX_BUILTIN (vftintrz_lu_d, LARCH_UV2DI_FTYPE_V2DF),
+  LSX_BUILTIN (vffint_s_w, LARCH_V4SF_FTYPE_V4SI),
+  LSX_BUILTIN (vffint_d_l, LARCH_V2DF_FTYPE_V2DI),
+  LSX_BUILTIN (vffint_s_wu, LARCH_V4SF_FTYPE_UV4SI),
+  LSX_BUILTIN (vffint_d_lu, LARCH_V2DF_FTYPE_UV2DI),
+
+  LSX_BUILTIN (vandn_v, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vneg_b, LARCH_V16QI_FTYPE_V16QI),
+  LSX_BUILTIN (vneg_h, LARCH_V8HI_FTYPE_V8HI),
+  LSX_BUILTIN (vneg_w, LARCH_V4SI_FTYPE_V4SI),
+  LSX_BUILTIN (vneg_d, LARCH_V2DI_FTYPE_V2DI),
+  LSX_BUILTIN (vmuh_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vmuh_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vmuh_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vmuh_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vmuh_bu, LARCH_UV16QI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vmuh_hu, LARCH_UV8HI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vmuh_wu, LARCH_UV4SI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vmuh_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vsllwil_h_b, LARCH_V8HI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vsllwil_w_h, LARCH_V4SI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vsllwil_d_w, LARCH_V2DI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vsllwil_hu_bu, LARCH_UV8HI_FTYPE_UV16QI_UQI),
+  LSX_BUILTIN (vsllwil_wu_hu, LARCH_UV4SI_FTYPE_UV8HI_UQI),
+  LSX_BUILTIN (vsllwil_du_wu, LARCH_UV2DI_FTYPE_UV4SI_UQI),
+  LSX_BUILTIN (vsran_b_h, LARCH_V16QI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsran_h_w, LARCH_V8HI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsran_w_d, LARCH_V4SI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vssran_b_h, LARCH_V16QI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vssran_h_w, LARCH_V8HI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vssran_w_d, LARCH_V4SI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vssran_bu_h, LARCH_UV16QI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vssran_hu_w, LARCH_UV8HI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vssran_wu_d, LARCH_UV4SI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vsrarn_b_h, LARCH_V16QI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsrarn_h_w, LARCH_V8HI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsrarn_w_d, LARCH_V4SI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vssrarn_b_h, LARCH_V16QI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vssrarn_h_w, LARCH_V8HI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vssrarn_w_d, LARCH_V4SI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vssrarn_bu_h, LARCH_UV16QI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vssrarn_hu_w, LARCH_UV8HI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vssrarn_wu_d, LARCH_UV4SI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vsrln_b_h, LARCH_V16QI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsrln_h_w, LARCH_V8HI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsrln_w_d, LARCH_V4SI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vssrln_bu_h, LARCH_UV16QI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vssrln_hu_w, LARCH_UV8HI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vssrln_wu_d, LARCH_UV4SI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vsrlrn_b_h, LARCH_V16QI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsrlrn_h_w, LARCH_V8HI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsrlrn_w_d, LARCH_V4SI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vssrlrn_bu_h, LARCH_UV16QI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vssrlrn_hu_w, LARCH_UV8HI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vssrlrn_wu_d, LARCH_UV4SI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vfrstpi_b, LARCH_V16QI_FTYPE_V16QI_V16QI_UQI),
+  LSX_BUILTIN (vfrstpi_h, LARCH_V8HI_FTYPE_V8HI_V8HI_UQI),
+  LSX_BUILTIN (vfrstp_b, LARCH_V16QI_FTYPE_V16QI_V16QI_V16QI),
+  LSX_BUILTIN (vfrstp_h, LARCH_V8HI_FTYPE_V8HI_V8HI_V8HI),
+  LSX_BUILTIN (vshuf4i_d, LARCH_V2DI_FTYPE_V2DI_V2DI_USI),
+  LSX_BUILTIN (vbsrl_v, LARCH_V16QI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vbsll_v, LARCH_V16QI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vextrins_b, LARCH_V16QI_FTYPE_V16QI_V16QI_USI),
+  LSX_BUILTIN (vextrins_h, LARCH_V8HI_FTYPE_V8HI_V8HI_USI),
+  LSX_BUILTIN (vextrins_w, LARCH_V4SI_FTYPE_V4SI_V4SI_USI),
+  LSX_BUILTIN (vextrins_d, LARCH_V2DI_FTYPE_V2DI_V2DI_USI),
+  LSX_BUILTIN (vmskltz_b, LARCH_V16QI_FTYPE_V16QI),
+  LSX_BUILTIN (vmskltz_h, LARCH_V8HI_FTYPE_V8HI),
+  LSX_BUILTIN (vmskltz_w, LARCH_V4SI_FTYPE_V4SI),
+  LSX_BUILTIN (vmskltz_d, LARCH_V2DI_FTYPE_V2DI),
+  LSX_BUILTIN (vsigncov_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vsigncov_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsigncov_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsigncov_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vfmadd_s, LARCH_V4SF_FTYPE_V4SF_V4SF_V4SF),
+  LSX_BUILTIN (vfmadd_d, LARCH_V2DF_FTYPE_V2DF_V2DF_V2DF),
+  LSX_BUILTIN (vfmsub_s, LARCH_V4SF_FTYPE_V4SF_V4SF_V4SF),
+  LSX_BUILTIN (vfmsub_d, LARCH_V2DF_FTYPE_V2DF_V2DF_V2DF),
+  LSX_BUILTIN (vfnmadd_s, LARCH_V4SF_FTYPE_V4SF_V4SF_V4SF),
+  LSX_BUILTIN (vfnmadd_d, LARCH_V2DF_FTYPE_V2DF_V2DF_V2DF),
+  LSX_BUILTIN (vfnmsub_s, LARCH_V4SF_FTYPE_V4SF_V4SF_V4SF),
+  LSX_BUILTIN (vfnmsub_d, LARCH_V2DF_FTYPE_V2DF_V2DF_V2DF),
+  LSX_BUILTIN (vftintrne_w_s, LARCH_V4SI_FTYPE_V4SF),
+  LSX_BUILTIN (vftintrne_l_d, LARCH_V2DI_FTYPE_V2DF),
+  LSX_BUILTIN (vftintrp_w_s, LARCH_V4SI_FTYPE_V4SF),
+  LSX_BUILTIN (vftintrp_l_d, LARCH_V2DI_FTYPE_V2DF),
+  LSX_BUILTIN (vftintrm_w_s, LARCH_V4SI_FTYPE_V4SF),
+  LSX_BUILTIN (vftintrm_l_d, LARCH_V2DI_FTYPE_V2DF),
+  LSX_BUILTIN (vftint_w_d, LARCH_V4SI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vffint_s_l, LARCH_V4SF_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vftintrz_w_d, LARCH_V4SI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vftintrp_w_d, LARCH_V4SI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vftintrm_w_d, LARCH_V4SI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vftintrne_w_d, LARCH_V4SI_FTYPE_V2DF_V2DF),
+  LSX_BUILTIN (vftintl_l_s, LARCH_V2DI_FTYPE_V4SF),
+  LSX_BUILTIN (vftinth_l_s, LARCH_V2DI_FTYPE_V4SF),
+  LSX_BUILTIN (vffinth_d_w, LARCH_V2DF_FTYPE_V4SI),
+  LSX_BUILTIN (vffintl_d_w, LARCH_V2DF_FTYPE_V4SI),
+  LSX_BUILTIN (vftintrzl_l_s, LARCH_V2DI_FTYPE_V4SF),
+  LSX_BUILTIN (vftintrzh_l_s, LARCH_V2DI_FTYPE_V4SF),
+  LSX_BUILTIN (vftintrpl_l_s, LARCH_V2DI_FTYPE_V4SF),
+  LSX_BUILTIN (vftintrph_l_s, LARCH_V2DI_FTYPE_V4SF),
+  LSX_BUILTIN (vftintrml_l_s, LARCH_V2DI_FTYPE_V4SF),
+  LSX_BUILTIN (vftintrmh_l_s, LARCH_V2DI_FTYPE_V4SF),
+  LSX_BUILTIN (vftintrnel_l_s, LARCH_V2DI_FTYPE_V4SF),
+  LSX_BUILTIN (vftintrneh_l_s, LARCH_V2DI_FTYPE_V4SF),
+  LSX_BUILTIN (vfrintrne_s, LARCH_V4SF_FTYPE_V4SF),
+  LSX_BUILTIN (vfrintrne_d, LARCH_V2DF_FTYPE_V2DF),
+  LSX_BUILTIN (vfrintrz_s, LARCH_V4SF_FTYPE_V4SF),
+  LSX_BUILTIN (vfrintrz_d, LARCH_V2DF_FTYPE_V2DF),
+  LSX_BUILTIN (vfrintrp_s, LARCH_V4SF_FTYPE_V4SF),
+  LSX_BUILTIN (vfrintrp_d, LARCH_V2DF_FTYPE_V2DF),
+  LSX_BUILTIN (vfrintrm_s, LARCH_V4SF_FTYPE_V4SF),
+  LSX_BUILTIN (vfrintrm_d, LARCH_V2DF_FTYPE_V2DF),
+  LSX_NO_TARGET_BUILTIN (vstelm_b, LARCH_VOID_FTYPE_V16QI_CVPOINTER_SI_UQI),
+  LSX_NO_TARGET_BUILTIN (vstelm_h, LARCH_VOID_FTYPE_V8HI_CVPOINTER_SI_UQI),
+  LSX_NO_TARGET_BUILTIN (vstelm_w, LARCH_VOID_FTYPE_V4SI_CVPOINTER_SI_UQI),
+  LSX_NO_TARGET_BUILTIN (vstelm_d, LARCH_VOID_FTYPE_V2DI_CVPOINTER_SI_UQI),
+  LSX_BUILTIN (vaddwev_d_w, LARCH_V2DI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vaddwev_w_h, LARCH_V4SI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vaddwev_h_b, LARCH_V8HI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vaddwod_d_w, LARCH_V2DI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vaddwod_w_h, LARCH_V4SI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vaddwod_h_b, LARCH_V8HI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vaddwev_d_wu, LARCH_V2DI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vaddwev_w_hu, LARCH_V4SI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vaddwev_h_bu, LARCH_V8HI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vaddwod_d_wu, LARCH_V2DI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vaddwod_w_hu, LARCH_V4SI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vaddwod_h_bu, LARCH_V8HI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vaddwev_d_wu_w, LARCH_V2DI_FTYPE_UV4SI_V4SI),
+  LSX_BUILTIN (vaddwev_w_hu_h, LARCH_V4SI_FTYPE_UV8HI_V8HI),
+  LSX_BUILTIN (vaddwev_h_bu_b, LARCH_V8HI_FTYPE_UV16QI_V16QI),
+  LSX_BUILTIN (vaddwod_d_wu_w, LARCH_V2DI_FTYPE_UV4SI_V4SI),
+  LSX_BUILTIN (vaddwod_w_hu_h, LARCH_V4SI_FTYPE_UV8HI_V8HI),
+  LSX_BUILTIN (vaddwod_h_bu_b, LARCH_V8HI_FTYPE_UV16QI_V16QI),
+  LSX_BUILTIN (vsubwev_d_w, LARCH_V2DI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsubwev_w_h, LARCH_V4SI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsubwev_h_b, LARCH_V8HI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vsubwod_d_w, LARCH_V2DI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vsubwod_w_h, LARCH_V4SI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vsubwod_h_b, LARCH_V8HI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vsubwev_d_wu, LARCH_V2DI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vsubwev_w_hu, LARCH_V4SI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vsubwev_h_bu, LARCH_V8HI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vsubwod_d_wu, LARCH_V2DI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vsubwod_w_hu, LARCH_V4SI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vsubwod_h_bu, LARCH_V8HI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vaddwev_q_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vaddwod_q_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vaddwev_q_du, LARCH_V2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vaddwod_q_du, LARCH_V2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vsubwev_q_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vsubwod_q_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vsubwev_q_du, LARCH_V2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vsubwod_q_du, LARCH_V2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vaddwev_q_du_d, LARCH_V2DI_FTYPE_UV2DI_V2DI),
+  LSX_BUILTIN (vaddwod_q_du_d, LARCH_V2DI_FTYPE_UV2DI_V2DI),
+
+  LSX_BUILTIN (vmulwev_d_w, LARCH_V2DI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vmulwev_w_h, LARCH_V4SI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vmulwev_h_b, LARCH_V8HI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vmulwod_d_w, LARCH_V2DI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vmulwod_w_h, LARCH_V4SI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vmulwod_h_b, LARCH_V8HI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vmulwev_d_wu, LARCH_V2DI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vmulwev_w_hu, LARCH_V4SI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vmulwev_h_bu, LARCH_V8HI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vmulwod_d_wu, LARCH_V2DI_FTYPE_UV4SI_UV4SI),
+  LSX_BUILTIN (vmulwod_w_hu, LARCH_V4SI_FTYPE_UV8HI_UV8HI),
+  LSX_BUILTIN (vmulwod_h_bu, LARCH_V8HI_FTYPE_UV16QI_UV16QI),
+  LSX_BUILTIN (vmulwev_d_wu_w, LARCH_V2DI_FTYPE_UV4SI_V4SI),
+  LSX_BUILTIN (vmulwev_w_hu_h, LARCH_V4SI_FTYPE_UV8HI_V8HI),
+  LSX_BUILTIN (vmulwev_h_bu_b, LARCH_V8HI_FTYPE_UV16QI_V16QI),
+  LSX_BUILTIN (vmulwod_d_wu_w, LARCH_V2DI_FTYPE_UV4SI_V4SI),
+  LSX_BUILTIN (vmulwod_w_hu_h, LARCH_V4SI_FTYPE_UV8HI_V8HI),
+  LSX_BUILTIN (vmulwod_h_bu_b, LARCH_V8HI_FTYPE_UV16QI_V16QI),
+  LSX_BUILTIN (vmulwev_q_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vmulwod_q_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vmulwev_q_du, LARCH_V2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vmulwod_q_du, LARCH_V2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vmulwev_q_du_d, LARCH_V2DI_FTYPE_UV2DI_V2DI),
+  LSX_BUILTIN (vmulwod_q_du_d, LARCH_V2DI_FTYPE_UV2DI_V2DI),
+  LSX_BUILTIN (vhaddw_q_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vhaddw_qu_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vhsubw_q_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vhsubw_qu_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI),
+  LSX_BUILTIN (vmaddwev_d_w, LARCH_V2DI_FTYPE_V2DI_V4SI_V4SI),
+  LSX_BUILTIN (vmaddwev_w_h, LARCH_V4SI_FTYPE_V4SI_V8HI_V8HI),
+  LSX_BUILTIN (vmaddwev_h_b, LARCH_V8HI_FTYPE_V8HI_V16QI_V16QI),
+  LSX_BUILTIN (vmaddwev_d_wu, LARCH_UV2DI_FTYPE_UV2DI_UV4SI_UV4SI),
+  LSX_BUILTIN (vmaddwev_w_hu, LARCH_UV4SI_FTYPE_UV4SI_UV8HI_UV8HI),
+  LSX_BUILTIN (vmaddwev_h_bu, LARCH_UV8HI_FTYPE_UV8HI_UV16QI_UV16QI),
+  LSX_BUILTIN (vmaddwod_d_w, LARCH_V2DI_FTYPE_V2DI_V4SI_V4SI),
+  LSX_BUILTIN (vmaddwod_w_h, LARCH_V4SI_FTYPE_V4SI_V8HI_V8HI),
+  LSX_BUILTIN (vmaddwod_h_b, LARCH_V8HI_FTYPE_V8HI_V16QI_V16QI),
+  LSX_BUILTIN (vmaddwod_d_wu, LARCH_UV2DI_FTYPE_UV2DI_UV4SI_UV4SI),
+  LSX_BUILTIN (vmaddwod_w_hu, LARCH_UV4SI_FTYPE_UV4SI_UV8HI_UV8HI),
+  LSX_BUILTIN (vmaddwod_h_bu, LARCH_UV8HI_FTYPE_UV8HI_UV16QI_UV16QI),
+  LSX_BUILTIN (vmaddwev_d_wu_w, LARCH_V2DI_FTYPE_V2DI_UV4SI_V4SI),
+  LSX_BUILTIN (vmaddwev_w_hu_h, LARCH_V4SI_FTYPE_V4SI_UV8HI_V8HI),
+  LSX_BUILTIN (vmaddwev_h_bu_b, LARCH_V8HI_FTYPE_V8HI_UV16QI_V16QI),
+  LSX_BUILTIN (vmaddwod_d_wu_w, LARCH_V2DI_FTYPE_V2DI_UV4SI_V4SI),
+  LSX_BUILTIN (vmaddwod_w_hu_h, LARCH_V4SI_FTYPE_V4SI_UV8HI_V8HI),
+  LSX_BUILTIN (vmaddwod_h_bu_b, LARCH_V8HI_FTYPE_V8HI_UV16QI_V16QI),
+  LSX_BUILTIN (vmaddwev_q_d, LARCH_V2DI_FTYPE_V2DI_V2DI_V2DI),
+  LSX_BUILTIN (vmaddwod_q_d, LARCH_V2DI_FTYPE_V2DI_V2DI_V2DI),
+  LSX_BUILTIN (vmaddwev_q_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI_UV2DI),
+  LSX_BUILTIN (vmaddwod_q_du, LARCH_UV2DI_FTYPE_UV2DI_UV2DI_UV2DI),
+  LSX_BUILTIN (vmaddwev_q_du_d, LARCH_V2DI_FTYPE_V2DI_UV2DI_V2DI),
+  LSX_BUILTIN (vmaddwod_q_du_d, LARCH_V2DI_FTYPE_V2DI_UV2DI_V2DI),
+  LSX_BUILTIN (vrotr_b, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vrotr_h, LARCH_V8HI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vrotr_w, LARCH_V4SI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vrotr_d, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vadd_q, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vsub_q, LARCH_V2DI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vldrepl_b, LARCH_V16QI_FTYPE_CVPOINTER_SI),
+  LSX_BUILTIN (vldrepl_h, LARCH_V8HI_FTYPE_CVPOINTER_SI),
+  LSX_BUILTIN (vldrepl_w, LARCH_V4SI_FTYPE_CVPOINTER_SI),
+  LSX_BUILTIN (vldrepl_d, LARCH_V2DI_FTYPE_CVPOINTER_SI),
+
+  LSX_BUILTIN (vmskgez_b, LARCH_V16QI_FTYPE_V16QI),
+  LSX_BUILTIN (vmsknz_b, LARCH_V16QI_FTYPE_V16QI),
+  LSX_BUILTIN (vexth_h_b, LARCH_V8HI_FTYPE_V16QI),
+  LSX_BUILTIN (vexth_w_h, LARCH_V4SI_FTYPE_V8HI),
+  LSX_BUILTIN (vexth_d_w, LARCH_V2DI_FTYPE_V4SI),
+  LSX_BUILTIN (vexth_q_d, LARCH_V2DI_FTYPE_V2DI),
+  LSX_BUILTIN (vexth_hu_bu, LARCH_UV8HI_FTYPE_UV16QI),
+  LSX_BUILTIN (vexth_wu_hu, LARCH_UV4SI_FTYPE_UV8HI),
+  LSX_BUILTIN (vexth_du_wu, LARCH_UV2DI_FTYPE_UV4SI),
+  LSX_BUILTIN (vexth_qu_du, LARCH_UV2DI_FTYPE_UV2DI),
+  LSX_BUILTIN (vrotri_b, LARCH_V16QI_FTYPE_V16QI_UQI),
+  LSX_BUILTIN (vrotri_h, LARCH_V8HI_FTYPE_V8HI_UQI),
+  LSX_BUILTIN (vrotri_w, LARCH_V4SI_FTYPE_V4SI_UQI),
+  LSX_BUILTIN (vrotri_d, LARCH_V2DI_FTYPE_V2DI_UQI),
+  LSX_BUILTIN (vextl_q_d, LARCH_V2DI_FTYPE_V2DI),
+  LSX_BUILTIN (vsrlni_b_h, LARCH_V16QI_FTYPE_V16QI_V16QI_USI),
+  LSX_BUILTIN (vsrlni_h_w, LARCH_V8HI_FTYPE_V8HI_V8HI_USI),
+  LSX_BUILTIN (vsrlni_w_d, LARCH_V4SI_FTYPE_V4SI_V4SI_USI),
+  LSX_BUILTIN (vsrlni_d_q, LARCH_V2DI_FTYPE_V2DI_V2DI_USI),
+  LSX_BUILTIN (vsrlrni_b_h, LARCH_V16QI_FTYPE_V16QI_V16QI_USI),
+  LSX_BUILTIN (vsrlrni_h_w, LARCH_V8HI_FTYPE_V8HI_V8HI_USI),
+  LSX_BUILTIN (vsrlrni_w_d, LARCH_V4SI_FTYPE_V4SI_V4SI_USI),
+  LSX_BUILTIN (vsrlrni_d_q, LARCH_V2DI_FTYPE_V2DI_V2DI_USI),
+  LSX_BUILTIN (vssrlni_b_h, LARCH_V16QI_FTYPE_V16QI_V16QI_USI),
+  LSX_BUILTIN (vssrlni_h_w, LARCH_V8HI_FTYPE_V8HI_V8HI_USI),
+  LSX_BUILTIN (vssrlni_w_d, LARCH_V4SI_FTYPE_V4SI_V4SI_USI),
+  LSX_BUILTIN (vssrlni_d_q, LARCH_V2DI_FTYPE_V2DI_V2DI_USI),
+  LSX_BUILTIN (vssrlni_bu_h, LARCH_UV16QI_FTYPE_UV16QI_V16QI_USI),
+  LSX_BUILTIN (vssrlni_hu_w, LARCH_UV8HI_FTYPE_UV8HI_V8HI_USI),
+  LSX_BUILTIN (vssrlni_wu_d, LARCH_UV4SI_FTYPE_UV4SI_V4SI_USI),
+  LSX_BUILTIN (vssrlni_du_q, LARCH_UV2DI_FTYPE_UV2DI_V2DI_USI),
+  LSX_BUILTIN (vssrlrni_b_h, LARCH_V16QI_FTYPE_V16QI_V16QI_USI),
+  LSX_BUILTIN (vssrlrni_h_w, LARCH_V8HI_FTYPE_V8HI_V8HI_USI),
+  LSX_BUILTIN (vssrlrni_w_d, LARCH_V4SI_FTYPE_V4SI_V4SI_USI),
+  LSX_BUILTIN (vssrlrni_d_q, LARCH_V2DI_FTYPE_V2DI_V2DI_USI),
+  LSX_BUILTIN (vssrlrni_bu_h, LARCH_UV16QI_FTYPE_UV16QI_V16QI_USI),
+  LSX_BUILTIN (vssrlrni_hu_w, LARCH_UV8HI_FTYPE_UV8HI_V8HI_USI),
+  LSX_BUILTIN (vssrlrni_wu_d, LARCH_UV4SI_FTYPE_UV4SI_V4SI_USI),
+  LSX_BUILTIN (vssrlrni_du_q, LARCH_UV2DI_FTYPE_UV2DI_V2DI_USI),
+  LSX_BUILTIN (vsrani_b_h, LARCH_V16QI_FTYPE_V16QI_V16QI_USI),
+  LSX_BUILTIN (vsrani_h_w, LARCH_V8HI_FTYPE_V8HI_V8HI_USI),
+  LSX_BUILTIN (vsrani_w_d, LARCH_V4SI_FTYPE_V4SI_V4SI_USI),
+  LSX_BUILTIN (vsrani_d_q, LARCH_V2DI_FTYPE_V2DI_V2DI_USI),
+  LSX_BUILTIN (vsrarni_b_h, LARCH_V16QI_FTYPE_V16QI_V16QI_USI),
+  LSX_BUILTIN (vsrarni_h_w, LARCH_V8HI_FTYPE_V8HI_V8HI_USI),
+  LSX_BUILTIN (vsrarni_w_d, LARCH_V4SI_FTYPE_V4SI_V4SI_USI),
+  LSX_BUILTIN (vsrarni_d_q, LARCH_V2DI_FTYPE_V2DI_V2DI_USI),
+  LSX_BUILTIN (vssrani_b_h, LARCH_V16QI_FTYPE_V16QI_V16QI_USI),
+  LSX_BUILTIN (vssrani_h_w, LARCH_V8HI_FTYPE_V8HI_V8HI_USI),
+  LSX_BUILTIN (vssrani_w_d, LARCH_V4SI_FTYPE_V4SI_V4SI_USI),
+  LSX_BUILTIN (vssrani_d_q, LARCH_V2DI_FTYPE_V2DI_V2DI_USI),
+  LSX_BUILTIN (vssrani_bu_h, LARCH_UV16QI_FTYPE_UV16QI_V16QI_USI),
+  LSX_BUILTIN (vssrani_hu_w, LARCH_UV8HI_FTYPE_UV8HI_V8HI_USI),
+  LSX_BUILTIN (vssrani_wu_d, LARCH_UV4SI_FTYPE_UV4SI_V4SI_USI),
+  LSX_BUILTIN (vssrani_du_q, LARCH_UV2DI_FTYPE_UV2DI_V2DI_USI),
+  LSX_BUILTIN (vssrarni_b_h, LARCH_V16QI_FTYPE_V16QI_V16QI_USI),
+  LSX_BUILTIN (vssrarni_h_w, LARCH_V8HI_FTYPE_V8HI_V8HI_USI),
+  LSX_BUILTIN (vssrarni_w_d, LARCH_V4SI_FTYPE_V4SI_V4SI_USI),
+  LSX_BUILTIN (vssrarni_d_q, LARCH_V2DI_FTYPE_V2DI_V2DI_USI),
+  LSX_BUILTIN (vssrarni_bu_h, LARCH_UV16QI_FTYPE_UV16QI_V16QI_USI),
+  LSX_BUILTIN (vssrarni_hu_w, LARCH_UV8HI_FTYPE_UV8HI_V8HI_USI),
+  LSX_BUILTIN (vssrarni_wu_d, LARCH_UV4SI_FTYPE_UV4SI_V4SI_USI),
+  LSX_BUILTIN (vssrarni_du_q, LARCH_UV2DI_FTYPE_UV2DI_V2DI_USI),
+  LSX_BUILTIN (vpermi_w, LARCH_V4SI_FTYPE_V4SI_V4SI_USI),
+  LSX_BUILTIN (vld, LARCH_V16QI_FTYPE_CVPOINTER_SI),
+  LSX_NO_TARGET_BUILTIN (vst, LARCH_VOID_FTYPE_V16QI_CVPOINTER_SI),
+  LSX_BUILTIN (vssrlrn_b_h, LARCH_V16QI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vssrlrn_h_w, LARCH_V8HI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vssrlrn_w_d, LARCH_V4SI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vssrln_b_h, LARCH_V16QI_FTYPE_V8HI_V8HI),
+  LSX_BUILTIN (vssrln_h_w, LARCH_V8HI_FTYPE_V4SI_V4SI),
+  LSX_BUILTIN (vssrln_w_d, LARCH_V4SI_FTYPE_V2DI_V2DI),
+  LSX_BUILTIN (vorn_v, LARCH_V16QI_FTYPE_V16QI_V16QI),
+  LSX_BUILTIN (vldi, LARCH_V2DI_FTYPE_HI),
+  LSX_BUILTIN (vshuf_b, LARCH_V16QI_FTYPE_V16QI_V16QI_V16QI),
+  LSX_BUILTIN (vldx, LARCH_V16QI_FTYPE_CVPOINTER_DI),
+  LSX_NO_TARGET_BUILTIN (vstx, LARCH_VOID_FTYPE_V16QI_CVPOINTER_DI),
+  LSX_BUILTIN (vextl_qu_du, LARCH_UV2DI_FTYPE_UV2DI)
 };
 
 /* Index I is the function declaration for loongarch_builtins[I], or null if
@@ -193,11 +1219,46 @@ static GTY (()) tree loongarch_builtin_decls[ARRAY_SIZE (loongarch_builtins)];
    using the instruction code or return null if not defined for the target.  */
 static GTY (()) int loongarch_get_builtin_decl_index[NUM_INSN_CODES];
 
+
+/* MODE is a vector mode whose elements have type TYPE.  Return the type
+   of the vector itself.  */
+
+static tree
+loongarch_builtin_vector_type (tree type, machine_mode mode)
+{
+  static tree types[2 * (int) MAX_MACHINE_MODE];
+  int mode_index;
+
+  mode_index = (int) mode;
+
+  if (TREE_CODE (type) == INTEGER_TYPE && TYPE_UNSIGNED (type))
+    mode_index += MAX_MACHINE_MODE;
+
+  if (types[mode_index] == NULL_TREE)
+    types[mode_index] = build_vector_type_for_mode (type, mode);
+  return types[mode_index];
+}
+
+/* Return a type for 'const volatile void *'.  */
+
+static tree
+loongarch_build_cvpointer_type (void)
+{
+  static tree cache;
+
+  if (cache == NULL_TREE)
+    cache = build_pointer_type (build_qualified_type (void_type_node,
+						      TYPE_QUAL_CONST
+						      | TYPE_QUAL_VOLATILE));
+  return cache;
+}
+
 /* Source-level argument types.  */
 #define LARCH_ATYPE_VOID void_type_node
 #define LARCH_ATYPE_INT integer_type_node
 #define LARCH_ATYPE_POINTER ptr_type_node
-
+#define LARCH_ATYPE_CVPOINTER loongarch_build_cvpointer_type ()
+#define LARCH_ATYPE_BOOLEAN boolean_type_node
 /* Standard mode-based argument types.  */
 #define LARCH_ATYPE_QI intQI_type_node
 #define LARCH_ATYPE_UQI unsigned_intQI_type_node
@@ -210,6 +1271,72 @@ static GTY (()) int loongarch_get_builtin_decl_index[NUM_INSN_CODES];
 #define LARCH_ATYPE_SF float_type_node
 #define LARCH_ATYPE_DF double_type_node
 
+/* Vector argument types.  */
+#define LARCH_ATYPE_V2SF						\
+  loongarch_builtin_vector_type (float_type_node, V2SFmode)
+#define LARCH_ATYPE_V2HI						\
+  loongarch_builtin_vector_type (intHI_type_node, V2HImode)
+#define LARCH_ATYPE_V2SI						\
+  loongarch_builtin_vector_type (intSI_type_node, V2SImode)
+#define LARCH_ATYPE_V4QI						\
+  loongarch_builtin_vector_type (intQI_type_node, V4QImode)
+#define LARCH_ATYPE_V4HI						\
+  loongarch_builtin_vector_type (intHI_type_node, V4HImode)
+#define LARCH_ATYPE_V8QI						\
+  loongarch_builtin_vector_type (intQI_type_node, V8QImode)
+
+#define LARCH_ATYPE_V2DI						\
+  loongarch_builtin_vector_type (long_long_integer_type_node, V2DImode)
+#define LARCH_ATYPE_V4SI						\
+  loongarch_builtin_vector_type (intSI_type_node, V4SImode)
+#define LARCH_ATYPE_V8HI						\
+  loongarch_builtin_vector_type (intHI_type_node, V8HImode)
+#define LARCH_ATYPE_V16QI						\
+  loongarch_builtin_vector_type (intQI_type_node, V16QImode)
+#define LARCH_ATYPE_V2DF						\
+  loongarch_builtin_vector_type (double_type_node, V2DFmode)
+#define LARCH_ATYPE_V4SF						\
+  loongarch_builtin_vector_type (float_type_node, V4SFmode)
+
+/* LoongArch ASX.  */
+#define LARCH_ATYPE_V4DI						\
+  loongarch_builtin_vector_type (long_long_integer_type_node, V4DImode)
+#define LARCH_ATYPE_V8SI						\
+  loongarch_builtin_vector_type (intSI_type_node, V8SImode)
+#define LARCH_ATYPE_V16HI						\
+  loongarch_builtin_vector_type (intHI_type_node, V16HImode)
+#define LARCH_ATYPE_V32QI						\
+  loongarch_builtin_vector_type (intQI_type_node, V32QImode)
+#define LARCH_ATYPE_V4DF						\
+  loongarch_builtin_vector_type (double_type_node, V4DFmode)
+#define LARCH_ATYPE_V8SF						\
+  loongarch_builtin_vector_type (float_type_node, V8SFmode)
+
+#define LARCH_ATYPE_UV2DI					\
+  loongarch_builtin_vector_type (long_long_unsigned_type_node, V2DImode)
+#define LARCH_ATYPE_UV4SI					\
+  loongarch_builtin_vector_type (unsigned_intSI_type_node, V4SImode)
+#define LARCH_ATYPE_UV8HI					\
+  loongarch_builtin_vector_type (unsigned_intHI_type_node, V8HImode)
+#define LARCH_ATYPE_UV16QI					\
+  loongarch_builtin_vector_type (unsigned_intQI_type_node, V16QImode)
+
+#define LARCH_ATYPE_UV4DI					\
+  loongarch_builtin_vector_type (long_long_unsigned_type_node, V4DImode)
+#define LARCH_ATYPE_UV8SI					\
+  loongarch_builtin_vector_type (unsigned_intSI_type_node, V8SImode)
+#define LARCH_ATYPE_UV16HI					\
+  loongarch_builtin_vector_type (unsigned_intHI_type_node, V16HImode)
+#define LARCH_ATYPE_UV32QI					\
+  loongarch_builtin_vector_type (unsigned_intQI_type_node, V32QImode)
+
+#define LARCH_ATYPE_UV2SI					\
+  loongarch_builtin_vector_type (unsigned_intSI_type_node, V2SImode)
+#define LARCH_ATYPE_UV4HI					\
+  loongarch_builtin_vector_type (unsigned_intHI_type_node, V4HImode)
+#define LARCH_ATYPE_UV8QI					\
+  loongarch_builtin_vector_type (unsigned_intQI_type_node, V8QImode)
+
 /* LARCH_FTYPE_ATYPESN takes N LARCH_FTYPES-like type codes and lists
    their associated LARCH_ATYPEs.  */
 #define LARCH_FTYPE_ATYPES1(A, B) LARCH_ATYPE_##A, LARCH_ATYPE_##B
@@ -283,6 +1410,92 @@ loongarch_builtin_decl (unsigned int code, bool initialize_p ATTRIBUTE_UNUSED)
   return loongarch_builtin_decls[code];
 }
 
+/* Implement TARGET_VECTORIZE_BUILTIN_VECTORIZED_FUNCTION.  */
+
+tree
+loongarch_builtin_vectorized_function (unsigned int fn, tree type_out,
+				       tree type_in)
+{
+  machine_mode in_mode, out_mode;
+  int in_n, out_n;
+
+  if (TREE_CODE (type_out) != VECTOR_TYPE
+      || TREE_CODE (type_in) != VECTOR_TYPE
+      || !ISA_HAS_LSX)
+    return NULL_TREE;
+
+  out_mode = TYPE_MODE (TREE_TYPE (type_out));
+  out_n = TYPE_VECTOR_SUBPARTS (type_out);
+  in_mode = TYPE_MODE (TREE_TYPE (type_in));
+  in_n = TYPE_VECTOR_SUBPARTS (type_in);
+
+  /* INSN is the name of the associated instruction pattern, without
+     the leading CODE_FOR_.  */
+#define LARCH_GET_BUILTIN(INSN) \
+  loongarch_builtin_decls[loongarch_get_builtin_decl_index[CODE_FOR_##INSN]]
+
+  switch (fn)
+    {
+    CASE_CFN_CEIL:
+      if (out_mode == DFmode && in_mode == DFmode)
+        {
+          if (out_n == 2 && in_n == 2)
+            return LARCH_GET_BUILTIN (lsx_vfrintrp_d);
+        }
+      if (out_mode == SFmode && in_mode == SFmode)
+        {
+          if (out_n == 4 && in_n == 4)
+            return LARCH_GET_BUILTIN (lsx_vfrintrp_s);
+        }
+      break;
+
+    CASE_CFN_TRUNC:
+      if (out_mode == DFmode && in_mode == DFmode)
+        {
+          if (out_n == 2 && in_n == 2)
+            return LARCH_GET_BUILTIN (lsx_vfrintrz_d);
+        }
+      if (out_mode == SFmode && in_mode == SFmode)
+        {
+          if (out_n == 4 && in_n == 4)
+            return LARCH_GET_BUILTIN (lsx_vfrintrz_s);
+        }
+      break;
+
+    CASE_CFN_RINT:
+    CASE_CFN_ROUND:
+      if (out_mode == DFmode && in_mode == DFmode)
+        {
+          if (out_n == 2 && in_n == 2)
+            return LARCH_GET_BUILTIN (lsx_vfrint_d);
+        }
+      if (out_mode == SFmode && in_mode == SFmode)
+        {
+          if (out_n == 4 && in_n == 4)
+            return LARCH_GET_BUILTIN (lsx_vfrint_s);
+        }
+      break;
+
+    CASE_CFN_FLOOR:
+      if (out_mode == DFmode && in_mode == DFmode)
+        {
+          if (out_n == 2 && in_n == 2)
+            return LARCH_GET_BUILTIN (lsx_vfrintrm_d);
+        }
+      if (out_mode == SFmode && in_mode == SFmode)
+        {
+          if (out_n == 4 && in_n == 4)
+            return LARCH_GET_BUILTIN (lsx_vfrintrm_s);
+        }
+      break;
+
+    default:
+      break;
+    }
+
+  return NULL_TREE;
+}
+
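
(As an illustration only, not taken from the patch: the kind of scalar loop
the TARGET_VECTORIZE_BUILTIN_VECTORIZED_FUNCTION hook above is meant to
catch.  The file name, function name and compile line are hypothetical, and
it assumes the -mlsx option added earlier in this series.)

    /* ceil-loop.c: when this loop is vectorized with V2DF, the hook above
       can map each __builtin_ceil call to the lsx_vfrintrp_d built-in
       selected in CASE_CFN_CEIL.  */
    void
    ceil_array (double *restrict dst, const double *restrict src, int n)
    {
      for (int i = 0; i < n; i++)
        dst[i] = __builtin_ceil (src[i]);
    }

(Building something like "gcc -O3 -mlsx -S ceil-loop.c" would then be
expected to use vfrintrp.d in the vector loop.)
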
 /* Take argument ARGNO from EXP's argument list and convert it into
    an expand operand.  Store the operand in *OP.  */
 
@@ -318,7 +1531,236 @@ static rtx
 loongarch_expand_builtin_insn (enum insn_code icode, unsigned int nops,
 			       struct expand_operand *ops, bool has_target_p)
 {
-  if (!maybe_expand_insn (icode, nops, ops))
+  machine_mode imode;
+  int rangelo = 0, rangehi = 0, error_opno = 0;
+
+  switch (icode)
+    {
+    case CODE_FOR_lsx_vaddi_bu:
+    case CODE_FOR_lsx_vaddi_hu:
+    case CODE_FOR_lsx_vaddi_wu:
+    case CODE_FOR_lsx_vaddi_du:
+    case CODE_FOR_lsx_vslti_bu:
+    case CODE_FOR_lsx_vslti_hu:
+    case CODE_FOR_lsx_vslti_wu:
+    case CODE_FOR_lsx_vslti_du:
+    case CODE_FOR_lsx_vslei_bu:
+    case CODE_FOR_lsx_vslei_hu:
+    case CODE_FOR_lsx_vslei_wu:
+    case CODE_FOR_lsx_vslei_du:
+    case CODE_FOR_lsx_vmaxi_bu:
+    case CODE_FOR_lsx_vmaxi_hu:
+    case CODE_FOR_lsx_vmaxi_wu:
+    case CODE_FOR_lsx_vmaxi_du:
+    case CODE_FOR_lsx_vmini_bu:
+    case CODE_FOR_lsx_vmini_hu:
+    case CODE_FOR_lsx_vmini_wu:
+    case CODE_FOR_lsx_vmini_du:
+    case CODE_FOR_lsx_vsubi_bu:
+    case CODE_FOR_lsx_vsubi_hu:
+    case CODE_FOR_lsx_vsubi_wu:
+    case CODE_FOR_lsx_vsubi_du:
+      gcc_assert (has_target_p && nops == 3);
+      /* We only generate a vector of constants iff the second argument
+	 is an immediate.  We also validate the range of the immediate.  */
+      if (CONST_INT_P (ops[2].value))
+	{
+	  rangelo = 0;
+	  rangehi = 31;
+	  if (IN_RANGE (INTVAL (ops[2].value), rangelo, rangehi))
+	    {
+	      ops[2].mode = ops[0].mode;
+	      ops[2].value = loongarch_gen_const_int_vector (ops[2].mode,
+							     INTVAL (ops[2].value));
+	    }
+	  else
+	    error_opno = 2;
+	}
+      break;
+
+    case CODE_FOR_lsx_vseqi_b:
+    case CODE_FOR_lsx_vseqi_h:
+    case CODE_FOR_lsx_vseqi_w:
+    case CODE_FOR_lsx_vseqi_d:
+    case CODE_FOR_lsx_vslti_b:
+    case CODE_FOR_lsx_vslti_h:
+    case CODE_FOR_lsx_vslti_w:
+    case CODE_FOR_lsx_vslti_d:
+    case CODE_FOR_lsx_vslei_b:
+    case CODE_FOR_lsx_vslei_h:
+    case CODE_FOR_lsx_vslei_w:
+    case CODE_FOR_lsx_vslei_d:
+    case CODE_FOR_lsx_vmaxi_b:
+    case CODE_FOR_lsx_vmaxi_h:
+    case CODE_FOR_lsx_vmaxi_w:
+    case CODE_FOR_lsx_vmaxi_d:
+    case CODE_FOR_lsx_vmini_b:
+    case CODE_FOR_lsx_vmini_h:
+    case CODE_FOR_lsx_vmini_w:
+    case CODE_FOR_lsx_vmini_d:
+      gcc_assert (has_target_p && nops == 3);
+      /* We only generate a vector of constants iff the second argument
+	 is an immediate.  We also validate the range of the immediate.  */
+      if (CONST_INT_P (ops[2].value))
+	{
+	  rangelo = -16;
+	  rangehi = 15;
+	  if (IN_RANGE (INTVAL (ops[2].value), rangelo, rangehi))
+	    {
+	      ops[2].mode = ops[0].mode;
+	      ops[2].value = loongarch_gen_const_int_vector (ops[2].mode,
+							     INTVAL (ops[2].value));
+	    }
+	  else
+	    error_opno = 2;
+	}
+      break;
+
+    case CODE_FOR_lsx_vandi_b:
+    case CODE_FOR_lsx_vori_b:
+    case CODE_FOR_lsx_vnori_b:
+    case CODE_FOR_lsx_vxori_b:
+      gcc_assert (has_target_p && nops == 3);
+      if (!CONST_INT_P (ops[2].value))
+	break;
+      ops[2].mode = ops[0].mode;
+      ops[2].value = loongarch_gen_const_int_vector (ops[2].mode,
+						     INTVAL (ops[2].value));
+      break;
+
+    case CODE_FOR_lsx_vbitseli_b:
+      gcc_assert (has_target_p && nops == 4);
+      if (!CONST_INT_P (ops[3].value))
+	break;
+      ops[3].mode = ops[0].mode;
+      ops[3].value = loongarch_gen_const_int_vector (ops[3].mode,
+						     INTVAL (ops[3].value));
+      break;
+
+    case CODE_FOR_lsx_vreplgr2vr_b:
+    case CODE_FOR_lsx_vreplgr2vr_h:
+    case CODE_FOR_lsx_vreplgr2vr_w:
+    case CODE_FOR_lsx_vreplgr2vr_d:
+      /* Map the built-ins to vector fill operations.  We need to fix up
+	 the mode for the element being inserted.  */
+      gcc_assert (has_target_p && nops == 2);
+      imode = GET_MODE_INNER (ops[0].mode);
+      ops[1].value = lowpart_subreg (imode, ops[1].value, ops[1].mode);
+      ops[1].mode = imode;
+      break;
+
+    case CODE_FOR_lsx_vilvh_b:
+    case CODE_FOR_lsx_vilvh_h:
+    case CODE_FOR_lsx_vilvh_w:
+    case CODE_FOR_lsx_vilvh_d:
+    case CODE_FOR_lsx_vilvl_b:
+    case CODE_FOR_lsx_vilvl_h:
+    case CODE_FOR_lsx_vilvl_w:
+    case CODE_FOR_lsx_vilvl_d:
+    case CODE_FOR_lsx_vpackev_b:
+    case CODE_FOR_lsx_vpackev_h:
+    case CODE_FOR_lsx_vpackev_w:
+    case CODE_FOR_lsx_vpackod_b:
+    case CODE_FOR_lsx_vpackod_h:
+    case CODE_FOR_lsx_vpackod_w:
+    case CODE_FOR_lsx_vpickev_b:
+    case CODE_FOR_lsx_vpickev_h:
+    case CODE_FOR_lsx_vpickev_w:
+    case CODE_FOR_lsx_vpickod_b:
+    case CODE_FOR_lsx_vpickod_h:
+    case CODE_FOR_lsx_vpickod_w:
+      /* Swap operands 1 and 2 for interleave operations.  The built-ins
+	 follow the ISA convention, which has op1 as the higher component
+	 and op2 as the lower component.  However, the VEC_PERM op in trees
+	 and vec_concat in RTL expect the first operand to be the lower
+	 component, so the operands must be swapped for the built-ins.  */
+      gcc_assert (has_target_p && nops == 3);
+      std::swap (ops[1], ops[2]);
+      break;
+
+    case CODE_FOR_lsx_vslli_b:
+    case CODE_FOR_lsx_vslli_h:
+    case CODE_FOR_lsx_vslli_w:
+    case CODE_FOR_lsx_vslli_d:
+    case CODE_FOR_lsx_vsrai_b:
+    case CODE_FOR_lsx_vsrai_h:
+    case CODE_FOR_lsx_vsrai_w:
+    case CODE_FOR_lsx_vsrai_d:
+    case CODE_FOR_lsx_vsrli_b:
+    case CODE_FOR_lsx_vsrli_h:
+    case CODE_FOR_lsx_vsrli_w:
+    case CODE_FOR_lsx_vsrli_d:
+      gcc_assert (has_target_p && nops == 3);
+      if (CONST_INT_P (ops[2].value))
+	{
+	  rangelo = 0;
+	  rangehi = GET_MODE_UNIT_BITSIZE (ops[0].mode) - 1;
+	  if (IN_RANGE (INTVAL (ops[2].value), rangelo, rangehi))
+	    {
+	      ops[2].mode = ops[0].mode;
+	      ops[2].value = loongarch_gen_const_int_vector (ops[2].mode,
+							     INTVAL (ops[2].value));
+	    }
+	  else
+	    error_opno = 2;
+	}
+      break;
+
+    case CODE_FOR_lsx_vinsgr2vr_b:
+    case CODE_FOR_lsx_vinsgr2vr_h:
+    case CODE_FOR_lsx_vinsgr2vr_w:
+    case CODE_FOR_lsx_vinsgr2vr_d:
+      /* Map the built-ins to insert operations.  We need to swap operands,
+	 fix up the mode for the element being inserted, and generate
+	 a bit mask for vec_merge.  */
+      gcc_assert (has_target_p && nops == 4);
+      std::swap (ops[1], ops[2]);
+      imode = GET_MODE_INNER (ops[0].mode);
+      ops[1].value = lowpart_subreg (imode, ops[1].value, ops[1].mode);
+      ops[1].mode = imode;
+      rangelo = 0;
+      rangehi = GET_MODE_NUNITS (ops[0].mode) - 1;
+      if (CONST_INT_P (ops[3].value)
+	  && IN_RANGE (INTVAL (ops[3].value), rangelo, rangehi))
+	ops[3].value = GEN_INT (1 << INTVAL (ops[3].value));
+      else
+	error_opno = 2;
+      break;
+
+    case CODE_FOR_lsx_vshuf4i_b:
+    case CODE_FOR_lsx_vshuf4i_h:
+    case CODE_FOR_lsx_vshuf4i_w:
+    case CODE_FOR_lsx_vshuf4i_w_f:
+      gcc_assert (has_target_p && nops == 3);
+      ops[2].value = loongarch_gen_const_int_vector_shuffle (ops[0].mode,
+							     INTVAL (ops[2].value));
+      break;
+
+    default:
+      break;
+    }
+
+  if (error_opno != 0)
+    {
+      error ("argument %d to the built-in must be a constant"
+	     " in range %d to %d", error_opno, rangelo, rangehi);
+      return has_target_p ? gen_reg_rtx (ops[0].mode) : const0_rtx;
+    }
+  else if (!maybe_expand_insn (icode, nops, ops))
     {
       error ("invalid argument to built-in function");
       return has_target_p ? gen_reg_rtx (ops[0].mode) : const0_rtx;
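
(A sketch of how the range check above surfaces to users, assuming the
__lsx_vmaxi_bu wrapper that the new lsxintrin.h provides for this built-in;
the function name is made up.)

    #include <lsxintrin.h>

    __m128i
    floor_bytes_at_7 (__m128i v)
    {
      /* OK: 7 is a literal constant in the accepted range 0..31.  */
      __m128i ok = __lsx_vmaxi_bu (v, 7);
      /* __lsx_vmaxi_bu (v, 40) would instead hit the new diagnostic
         "argument 2 to the built-in must be a constant in range 0 to 31".  */
      return ok;
    }
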
@@ -352,6 +1794,50 @@ loongarch_expand_builtin_direct (enum insn_code icode, rtx target, tree exp,
   return loongarch_expand_builtin_insn (icode, opno, ops, has_target_p);
 }
 
+/* Expand an LSX built-in for a compare-and-branch instruction specified by
+   ICODE; set a general-purpose register to 1 if the branch was taken
+   and to 0 otherwise.  */
+
+static rtx
+loongarch_expand_builtin_lsx_test_branch (enum insn_code icode, tree exp)
+{
+  struct expand_operand ops[3];
+  rtx_insn *cbranch;
+  rtx_code_label *true_label, *done_label;
+  rtx cmp_result;
+
+  true_label = gen_label_rtx ();
+  done_label = gen_label_rtx ();
+
+  create_input_operand (&ops[0], true_label, TYPE_MODE (TREE_TYPE (exp)));
+  loongarch_prepare_builtin_arg (&ops[1], exp, 0);
+  create_fixed_operand (&ops[2], const0_rtx);
+
+  /* Make sure that operand 1 is a REG.  */
+  if (GET_CODE (ops[1].value) != REG)
+    ops[1].value = force_reg (ops[1].mode, ops[1].value);
+
+  if ((cbranch = maybe_gen_insn (icode, 3, ops)) == NULL_RTX)
+    error ("failed to expand built-in function");
+
+  cmp_result = gen_reg_rtx (SImode);
+
+  /* First assume that CMP_RESULT is false.  */
+  loongarch_emit_move (cmp_result, const0_rtx);
+
+  /* Branch to TRUE_LABEL if CBRANCH is taken and DONE_LABEL otherwise.  */
+  emit_jump_insn (cbranch);
+  emit_jump_insn (gen_jump (done_label));
+  emit_barrier ();
+
+  /* Set CMP_RESULT to true if the branch was taken.  */
+  emit_label (true_label);
+  loongarch_emit_move (cmp_result, const1_rtx);
+
+  emit_label (done_label);
+  return cmp_result;
+}
+
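
(Again only an illustrative sketch, assuming the __lsx_vld and __lsx_bnz_v
wrappers from the new lsxintrin.h: the bz/bnz built-ins expanded above hand
back the branch outcome as a plain 0/1 value in a general-purpose register,
so they can be used directly in scalar control flow.)

    #include <lsxintrin.h>

    /* Return 1 if any bit in the 16-byte block at P is set, else 0.
       Per the ISA naming, bnz.v is taken when the vector register is not
       all zero, so the built-in returns the answer directly.  */
    int
    block_is_nonzero (const void *p)
    {
      __m128i v = __lsx_vld (p, 0);
      return __lsx_bnz_v (v);
    }
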
 /* Implement TARGET_EXPAND_BUILTIN.  */
 
 rtx
@@ -372,10 +1858,14 @@ loongarch_expand_builtin (tree exp, rtx target, rtx subtarget ATTRIBUTE_UNUSED,
   switch (d->builtin_type)
     {
     case LARCH_BUILTIN_DIRECT:
+    case LARCH_BUILTIN_LSX:
       return loongarch_expand_builtin_direct (d->icode, target, exp, true);
 
     case LARCH_BUILTIN_DIRECT_NO_TARGET:
       return loongarch_expand_builtin_direct (d->icode, target, exp, false);
+
+    case LARCH_BUILTIN_LSX_TEST_BRANCH:
+      return loongarch_expand_builtin_lsx_test_branch (d->icode, exp);
     }
   gcc_unreachable ();
 }
diff --git a/gcc/config/loongarch/loongarch-ftypes.def b/gcc/config/loongarch/loongarch-ftypes.def
index 06d2e0519f7..1ce9d83ccab 100644
--- a/gcc/config/loongarch/loongarch-ftypes.def
+++ b/gcc/config/loongarch/loongarch-ftypes.def
@@ -1,7 +1,7 @@
 /* Definitions of prototypes for LoongArch built-in functions.
    Copyright (C) 2021-2023 Free Software Foundation, Inc.
    Contributed by Loongson Ltd.
-   Based on MIPS target for GNU ckompiler.
+   Based on MIPS target for GNU compiler.
 
 This file is part of GCC.
 
@@ -32,7 +32,7 @@ along with GCC; see the file COPYING3.  If not see
       INT for integer_type_node
       POINTER for ptr_type_node
 
-   (we don't use PTR because that's a ANSI-compatibillity macro).
+   (we don't use PTR because that's an ANSI-compatibility macro).
 
    Please keep this list lexicographically sorted by the LIST argument.  */
 
@@ -63,3 +63,396 @@ DEF_LARCH_FTYPE (3, (VOID, USI, USI, SI))
 DEF_LARCH_FTYPE (3, (VOID, USI, UDI, SI))
 DEF_LARCH_FTYPE (3, (USI, USI, USI, USI))
 DEF_LARCH_FTYPE (3, (UDI, UDI, UDI, USI))
+
+DEF_LARCH_FTYPE (1, (DF, DF))
+DEF_LARCH_FTYPE (2, (DF, DF, DF))
+DEF_LARCH_FTYPE (1, (DF, V2DF))
+
+DEF_LARCH_FTYPE (1, (DI, DI))
+DEF_LARCH_FTYPE (1, (DI, SI))
+DEF_LARCH_FTYPE (1, (DI, UQI))
+DEF_LARCH_FTYPE (2, (DI, DI, DI))
+DEF_LARCH_FTYPE (2, (DI, DI, SI))
+DEF_LARCH_FTYPE (3, (DI, DI, SI, SI))
+DEF_LARCH_FTYPE (3, (DI, DI, USI, USI))
+DEF_LARCH_FTYPE (3, (DI, DI, DI, QI))
+DEF_LARCH_FTYPE (3, (DI, DI, V2HI, V2HI))
+DEF_LARCH_FTYPE (3, (DI, DI, V4QI, V4QI))
+DEF_LARCH_FTYPE (2, (DI, POINTER, SI))
+DEF_LARCH_FTYPE (2, (DI, SI, SI))
+DEF_LARCH_FTYPE (2, (DI, USI, USI))
+
+DEF_LARCH_FTYPE (2, (DI, V2DI, UQI))
+
+DEF_LARCH_FTYPE (2, (INT, DF, DF))
+DEF_LARCH_FTYPE (2, (INT, SF, SF))
+
+DEF_LARCH_FTYPE (2, (INT, V2SF, V2SF))
+DEF_LARCH_FTYPE (4, (INT, V2SF, V2SF, V2SF, V2SF))
+
+DEF_LARCH_FTYPE (1, (SF, SF))
+DEF_LARCH_FTYPE (2, (SF, SF, SF))
+DEF_LARCH_FTYPE (1, (SF, V2SF))
+DEF_LARCH_FTYPE (1, (SF, V4SF))
+
+DEF_LARCH_FTYPE (2, (SI, POINTER, SI))
+DEF_LARCH_FTYPE (1, (SI, SI))
+DEF_LARCH_FTYPE (1, (SI, UDI))
+DEF_LARCH_FTYPE (2, (QI, QI, QI))
+DEF_LARCH_FTYPE (2, (HI, HI, HI))
+DEF_LARCH_FTYPE (3, (SI, SI, SI, SI))
+DEF_LARCH_FTYPE (3, (SI, SI, SI, QI))
+DEF_LARCH_FTYPE (1, (SI, UQI))
+DEF_LARCH_FTYPE (1, (SI, UV16QI))
+DEF_LARCH_FTYPE (1, (SI, UV2DI))
+DEF_LARCH_FTYPE (1, (SI, UV4SI))
+DEF_LARCH_FTYPE (1, (SI, UV8HI))
+DEF_LARCH_FTYPE (2, (SI, V16QI, UQI))
+DEF_LARCH_FTYPE (1, (SI, V2HI))
+DEF_LARCH_FTYPE (2, (SI, V2HI, V2HI))
+DEF_LARCH_FTYPE (1, (SI, V4QI))
+DEF_LARCH_FTYPE (2, (SI, V4QI, V4QI))
+DEF_LARCH_FTYPE (2, (SI, V4SI, UQI))
+DEF_LARCH_FTYPE (2, (SI, V8HI, UQI))
+DEF_LARCH_FTYPE (1, (SI, VOID))
+
+DEF_LARCH_FTYPE (2, (UDI, UDI, UDI))
+DEF_LARCH_FTYPE (2, (UDI, UV2SI, UV2SI))
+DEF_LARCH_FTYPE (2, (UDI, V2DI, UQI))
+
+DEF_LARCH_FTYPE (2, (USI, V16QI, UQI))
+DEF_LARCH_FTYPE (2, (USI, V4SI, UQI))
+DEF_LARCH_FTYPE (2, (USI, V8HI, UQI))
+DEF_LARCH_FTYPE (1, (USI, VOID))
+
+DEF_LARCH_FTYPE (2, (UV16QI, UV16QI, UQI))
+DEF_LARCH_FTYPE (2, (UV16QI, UV16QI, USI))
+DEF_LARCH_FTYPE (2, (UV16QI, UV16QI, UV16QI))
+DEF_LARCH_FTYPE (3, (UV16QI, UV16QI, UV16QI, UQI))
+DEF_LARCH_FTYPE (3, (UV16QI, UV16QI, UV16QI, USI))
+DEF_LARCH_FTYPE (3, (UV16QI, UV16QI, UV16QI, UV16QI))
+DEF_LARCH_FTYPE (2, (UV16QI, UV16QI, V16QI))
+
+DEF_LARCH_FTYPE (2, (UV2DI, UV2DI, UQI))
+DEF_LARCH_FTYPE (2, (UV2DI, UV2DI, UV2DI))
+DEF_LARCH_FTYPE (3, (UV2DI, UV2DI, UV2DI, UQI))
+DEF_LARCH_FTYPE (3, (UV2DI, UV2DI, UV2DI, UV2DI))
+DEF_LARCH_FTYPE (3, (UV2DI, UV2DI, UV4SI, UV4SI))
+DEF_LARCH_FTYPE (2, (UV2DI, UV2DI, V2DI))
+DEF_LARCH_FTYPE (2, (UV2DI, UV4SI, UV4SI))
+DEF_LARCH_FTYPE (1, (UV2DI, V2DF))
+
+DEF_LARCH_FTYPE (2, (UV2SI, UV2SI, UQI))
+DEF_LARCH_FTYPE (2, (UV2SI, UV2SI, UV2SI))
+
+DEF_LARCH_FTYPE (2, (UV4HI, UV4HI, UQI))
+DEF_LARCH_FTYPE (2, (UV4HI, UV4HI, USI))
+DEF_LARCH_FTYPE (2, (UV4HI, UV4HI, UV4HI))
+DEF_LARCH_FTYPE (3, (UV4HI, UV4HI, UV4HI, UQI))
+DEF_LARCH_FTYPE (3, (UV4HI, UV4HI, UV4HI, USI))
+DEF_LARCH_FTYPE (1, (UV4HI, UV8QI))
+DEF_LARCH_FTYPE (2, (UV4HI, UV8QI, UV8QI))
+
+DEF_LARCH_FTYPE (2, (UV4SI, UV4SI, UQI))
+DEF_LARCH_FTYPE (2, (UV4SI, UV4SI, UV4SI))
+DEF_LARCH_FTYPE (3, (UV4SI, UV4SI, UV4SI, UQI))
+DEF_LARCH_FTYPE (3, (UV4SI, UV4SI, UV4SI, UV4SI))
+DEF_LARCH_FTYPE (3, (UV4SI, UV4SI, UV8HI, UV8HI))
+DEF_LARCH_FTYPE (2, (UV4SI, UV4SI, V4SI))
+DEF_LARCH_FTYPE (2, (UV4SI, UV8HI, UV8HI))
+DEF_LARCH_FTYPE (1, (UV4SI, V4SF))
+
+DEF_LARCH_FTYPE (2, (UV8HI, UV16QI, UV16QI))
+DEF_LARCH_FTYPE (2, (UV8HI, UV8HI, UQI))
+DEF_LARCH_FTYPE (3, (UV8HI, UV8HI, UV16QI, UV16QI))
+DEF_LARCH_FTYPE (2, (UV8HI, UV8HI, UV8HI))
+DEF_LARCH_FTYPE (3, (UV8HI, UV8HI, UV8HI, UQI))
+DEF_LARCH_FTYPE (3, (UV8HI, UV8HI, UV8HI, UV8HI))
+DEF_LARCH_FTYPE (2, (UV8HI, UV8HI, V8HI))
+
+DEF_LARCH_FTYPE (2, (UV8QI, UV4HI, UV4HI))
+DEF_LARCH_FTYPE (1, (UV8QI, UV8QI))
+DEF_LARCH_FTYPE (2, (UV8QI, UV8QI, UV8QI))
+
+DEF_LARCH_FTYPE (2, (V16QI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (2, (V16QI, CVPOINTER, DI))
+DEF_LARCH_FTYPE (1, (V16QI, HI))
+DEF_LARCH_FTYPE (1, (V16QI, SI))
+DEF_LARCH_FTYPE (2, (V16QI, UV16QI, UQI))
+DEF_LARCH_FTYPE (2, (V16QI, UV16QI, UV16QI))
+DEF_LARCH_FTYPE (1, (V16QI, V16QI))
+DEF_LARCH_FTYPE (2, (V16QI, V16QI, QI))
+DEF_LARCH_FTYPE (2, (V16QI, V16QI, SI))
+DEF_LARCH_FTYPE (2, (V16QI, V16QI, USI))
+DEF_LARCH_FTYPE (2, (V16QI, V16QI, UQI))
+DEF_LARCH_FTYPE (3, (V16QI, V16QI, UQI, SI))
+DEF_LARCH_FTYPE (3, (V16QI, V16QI, UQI, V16QI))
+DEF_LARCH_FTYPE (2, (V16QI, V16QI, V16QI))
+DEF_LARCH_FTYPE (3, (V16QI, V16QI, V16QI, SI))
+DEF_LARCH_FTYPE (3, (V16QI, V16QI, V16QI, UQI))
+DEF_LARCH_FTYPE (4, (V16QI, V16QI, V16QI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V16QI, V16QI, V16QI, USI))
+DEF_LARCH_FTYPE (3, (V16QI, V16QI, V16QI, V16QI))
+
+DEF_LARCH_FTYPE (1, (V2DF, DF))
+DEF_LARCH_FTYPE (1, (V2DF, UV2DI))
+DEF_LARCH_FTYPE (1, (V2DF, V2DF))
+DEF_LARCH_FTYPE (2, (V2DF, V2DF, V2DF))
+DEF_LARCH_FTYPE (3, (V2DF, V2DF, V2DF, V2DF))
+DEF_LARCH_FTYPE (2, (V2DF, V2DF, V2DI))
+DEF_LARCH_FTYPE (1, (V2DF, V2DI))
+DEF_LARCH_FTYPE (1, (V2DF, V4SF))
+DEF_LARCH_FTYPE (1, (V2DF, V4SI))
+
+DEF_LARCH_FTYPE (2, (V2DI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (1, (V2DI, DI))
+DEF_LARCH_FTYPE (1, (V2DI, HI))
+DEF_LARCH_FTYPE (2, (V2DI, UV2DI, UQI))
+DEF_LARCH_FTYPE (2, (V2DI, UV2DI, UV2DI))
+DEF_LARCH_FTYPE (2, (V2DI, UV4SI, UV4SI))
+DEF_LARCH_FTYPE (1, (V2DI, V2DF))
+DEF_LARCH_FTYPE (2, (V2DI, V2DF, V2DF))
+DEF_LARCH_FTYPE (1, (V2DI, V2DI))
+DEF_LARCH_FTYPE (1, (UV2DI, UV2DI))
+DEF_LARCH_FTYPE (2, (V2DI, V2DI, QI))
+DEF_LARCH_FTYPE (2, (V2DI, V2DI, SI))
+DEF_LARCH_FTYPE (2, (V2DI, V2DI, UQI))
+DEF_LARCH_FTYPE (2, (V2DI, V2DI, USI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, UQI, DI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, UQI, V2DI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, UV4SI, UV4SI))
+DEF_LARCH_FTYPE (2, (V2DI, V2DI, V2DI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, V2DI, SI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, V2DI, UQI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, V2DI, USI))
+DEF_LARCH_FTYPE (4, (V2DI, V2DI, V2DI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, V2DI, V2DI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, V4SI, V4SI))
+DEF_LARCH_FTYPE (2, (V2DI, V4SI, V4SI))
+
+DEF_LARCH_FTYPE (1, (V2HI, SI))
+DEF_LARCH_FTYPE (2, (V2HI, SI, SI))
+DEF_LARCH_FTYPE (3, (V2HI, SI, SI, SI))
+DEF_LARCH_FTYPE (1, (V2HI, V2HI))
+DEF_LARCH_FTYPE (2, (V2HI, V2HI, SI))
+DEF_LARCH_FTYPE (2, (V2HI, V2HI, V2HI))
+DEF_LARCH_FTYPE (1, (V2HI, V4QI))
+DEF_LARCH_FTYPE (2, (V2HI, V4QI, V2HI))
+
+DEF_LARCH_FTYPE (2, (V2SF, SF, SF))
+DEF_LARCH_FTYPE (1, (V2SF, V2SF))
+DEF_LARCH_FTYPE (2, (V2SF, V2SF, V2SF))
+DEF_LARCH_FTYPE (3, (V2SF, V2SF, V2SF, INT))
+DEF_LARCH_FTYPE (4, (V2SF, V2SF, V2SF, V2SF, V2SF))
+
+DEF_LARCH_FTYPE (2, (V2SI, V2SI, UQI))
+DEF_LARCH_FTYPE (2, (V2SI, V2SI, V2SI))
+DEF_LARCH_FTYPE (2, (V2SI, V4HI, V4HI))
+
+DEF_LARCH_FTYPE (2, (V4HI, V2SI, V2SI))
+DEF_LARCH_FTYPE (2, (V4HI, V4HI, UQI))
+DEF_LARCH_FTYPE (2, (V4HI, V4HI, USI))
+DEF_LARCH_FTYPE (2, (V4HI, V4HI, V4HI))
+DEF_LARCH_FTYPE (3, (V4HI, V4HI, V4HI, UQI))
+DEF_LARCH_FTYPE (3, (V4HI, V4HI, V4HI, USI))
+
+DEF_LARCH_FTYPE (1, (V4QI, SI))
+DEF_LARCH_FTYPE (2, (V4QI, V2HI, V2HI))
+DEF_LARCH_FTYPE (1, (V4QI, V4QI))
+DEF_LARCH_FTYPE (2, (V4QI, V4QI, SI))
+DEF_LARCH_FTYPE (2, (V4QI, V4QI, V4QI))
+
+DEF_LARCH_FTYPE (1, (V4SF, SF))
+DEF_LARCH_FTYPE (1, (V4SF, UV4SI))
+DEF_LARCH_FTYPE (2, (V4SF, V2DF, V2DF))
+DEF_LARCH_FTYPE (1, (V4SF, V4SF))
+DEF_LARCH_FTYPE (2, (V4SF, V4SF, V4SF))
+DEF_LARCH_FTYPE (3, (V4SF, V4SF, V4SF, V4SF))
+DEF_LARCH_FTYPE (2, (V4SF, V4SF, V4SI))
+DEF_LARCH_FTYPE (1, (V4SF, V4SI))
+DEF_LARCH_FTYPE (1, (V4SF, V8HI))
+
+DEF_LARCH_FTYPE (2, (V4SI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (1, (V4SI, HI))
+DEF_LARCH_FTYPE (1, (V4SI, SI))
+DEF_LARCH_FTYPE (2, (V4SI, UV4SI, UQI))
+DEF_LARCH_FTYPE (2, (V4SI, UV4SI, UV4SI))
+DEF_LARCH_FTYPE (2, (V4SI, UV8HI, UV8HI))
+DEF_LARCH_FTYPE (2, (V4SI, V2DF, V2DF))
+DEF_LARCH_FTYPE (1, (V4SI, V4SF))
+DEF_LARCH_FTYPE (2, (V4SI, V4SF, V4SF))
+DEF_LARCH_FTYPE (1, (V4SI, V4SI))
+DEF_LARCH_FTYPE (2, (V4SI, V4SI, QI))
+DEF_LARCH_FTYPE (2, (V4SI, V4SI, SI))
+DEF_LARCH_FTYPE (2, (V4SI, V4SI, UQI))
+DEF_LARCH_FTYPE (2, (V4SI, V4SI, USI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, UQI, SI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, UQI, V4SI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, UV8HI, UV8HI))
+DEF_LARCH_FTYPE (2, (V4SI, V4SI, V4SI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, V4SI, SI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, V4SI, UQI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, V4SI, USI))
+DEF_LARCH_FTYPE (4, (V4SI, V4SI, V4SI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, V4SI, V4SI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, V8HI, V8HI))
+DEF_LARCH_FTYPE (2, (V4SI, V8HI, V8HI))
+
+DEF_LARCH_FTYPE (2, (V8HI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (1, (V8HI, HI))
+DEF_LARCH_FTYPE (1, (V8HI, SI))
+DEF_LARCH_FTYPE (2, (V8HI, UV16QI, UV16QI))
+DEF_LARCH_FTYPE (2, (V8HI, UV8HI, UQI))
+DEF_LARCH_FTYPE (2, (V8HI, UV8HI, UV8HI))
+DEF_LARCH_FTYPE (2, (V8HI, V16QI, V16QI))
+DEF_LARCH_FTYPE (2, (V8HI, V4SF, V4SF))
+DEF_LARCH_FTYPE (1, (V8HI, V8HI))
+DEF_LARCH_FTYPE (2, (V8HI, V8HI, QI))
+DEF_LARCH_FTYPE (2, (V8HI, V8HI, SI))
+DEF_LARCH_FTYPE (3, (V8HI, V8HI, SI, UQI))
+DEF_LARCH_FTYPE (2, (V8HI, V8HI, UQI))
+DEF_LARCH_FTYPE (2, (V8HI, V8HI, USI))
+DEF_LARCH_FTYPE (3, (V8HI, V8HI, UQI, SI))
+DEF_LARCH_FTYPE (3, (V8HI, V8HI, UQI, V8HI))
+DEF_LARCH_FTYPE (3, (V8HI, V8HI, UV16QI, UV16QI))
+DEF_LARCH_FTYPE (3, (V8HI, V8HI, V16QI, V16QI))
+DEF_LARCH_FTYPE (2, (V8HI, V8HI, V8HI))
+DEF_LARCH_FTYPE (3, (V8HI, V8HI, V8HI, SI))
+DEF_LARCH_FTYPE (3, (V8HI, V8HI, V8HI, UQI))
+DEF_LARCH_FTYPE (4, (V8HI, V8HI, V8HI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V8HI, V8HI, V8HI, USI))
+DEF_LARCH_FTYPE (3, (V8HI, V8HI, V8HI, V8HI))
+
+DEF_LARCH_FTYPE (2, (V8QI, V4HI, V4HI))
+DEF_LARCH_FTYPE (1, (V8QI, V8QI))
+DEF_LARCH_FTYPE (2, (V8QI, V8QI, V8QI))
+
+DEF_LARCH_FTYPE (2, (VOID, SI, CVPOINTER))
+DEF_LARCH_FTYPE (2, (VOID, SI, SI))
+DEF_LARCH_FTYPE (2, (VOID, UQI, SI))
+DEF_LARCH_FTYPE (2, (VOID, USI, UQI))
+DEF_LARCH_FTYPE (1, (VOID, UHI))
+DEF_LARCH_FTYPE (3, (VOID, V16QI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (3, (VOID, V16QI, CVPOINTER, DI))
+DEF_LARCH_FTYPE (3, (VOID, V2DF, POINTER, SI))
+DEF_LARCH_FTYPE (3, (VOID, V2DI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (2, (VOID, V2HI, V2HI))
+DEF_LARCH_FTYPE (2, (VOID, V4QI, V4QI))
+DEF_LARCH_FTYPE (3, (VOID, V4SF, POINTER, SI))
+DEF_LARCH_FTYPE (3, (VOID, V4SI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (3, (VOID, V8HI, CVPOINTER, SI))
+
+DEF_LARCH_FTYPE (1, (V8HI, V16QI))
+DEF_LARCH_FTYPE (1, (V4SI, V16QI))
+DEF_LARCH_FTYPE (1, (V2DI, V16QI))
+DEF_LARCH_FTYPE (1, (V4SI, V8HI))
+DEF_LARCH_FTYPE (1, (V2DI, V8HI))
+DEF_LARCH_FTYPE (1, (V2DI, V4SI))
+DEF_LARCH_FTYPE (1, (UV8HI, V16QI))
+DEF_LARCH_FTYPE (1, (UV4SI, V16QI))
+DEF_LARCH_FTYPE (1, (UV2DI, V16QI))
+DEF_LARCH_FTYPE (1, (UV4SI, V8HI))
+DEF_LARCH_FTYPE (1, (UV2DI, V8HI))
+DEF_LARCH_FTYPE (1, (UV2DI, V4SI))
+DEF_LARCH_FTYPE (1, (UV8HI, UV16QI))
+DEF_LARCH_FTYPE (1, (UV4SI, UV16QI))
+DEF_LARCH_FTYPE (1, (UV2DI, UV16QI))
+DEF_LARCH_FTYPE (1, (UV4SI, UV8HI))
+DEF_LARCH_FTYPE (1, (UV2DI, UV8HI))
+DEF_LARCH_FTYPE (1, (UV2DI, UV4SI))
+DEF_LARCH_FTYPE (2, (UV8HI, V16QI, V16QI))
+DEF_LARCH_FTYPE (2, (UV4SI, V8HI, V8HI))
+DEF_LARCH_FTYPE (2, (UV2DI, V4SI, V4SI))
+DEF_LARCH_FTYPE (2, (V8HI, V16QI, UQI))
+DEF_LARCH_FTYPE (2, (V4SI, V8HI, UQI))
+DEF_LARCH_FTYPE (2, (V2DI, V4SI, UQI))
+DEF_LARCH_FTYPE (2, (UV8HI, UV16QI, UQI))
+DEF_LARCH_FTYPE (2, (UV4SI, UV8HI, UQI))
+DEF_LARCH_FTYPE (2, (UV2DI, UV4SI, UQI))
+DEF_LARCH_FTYPE (2, (V16QI, V8HI, V8HI))
+DEF_LARCH_FTYPE (2, (V8HI, V4SI, V4SI))
+DEF_LARCH_FTYPE (2, (V4SI, V2DI, V2DI))
+DEF_LARCH_FTYPE (2, (UV16QI, UV8HI, UV8HI))
+DEF_LARCH_FTYPE (2, (UV8HI, UV4SI, UV4SI))
+DEF_LARCH_FTYPE (2, (UV4SI, UV2DI, UV2DI))
+DEF_LARCH_FTYPE (2, (V16QI, V8HI, UQI))
+DEF_LARCH_FTYPE (2, (V8HI, V4SI, UQI))
+DEF_LARCH_FTYPE (2, (V4SI, V2DI, UQI))
+DEF_LARCH_FTYPE (2, (UV16QI, UV8HI, UQI))
+DEF_LARCH_FTYPE (2, (UV8HI, UV4SI, UQI))
+DEF_LARCH_FTYPE (2, (UV4SI, UV2DI, UQI))
+DEF_LARCH_FTYPE (2, (V16QI, V16QI, DI))
+DEF_LARCH_FTYPE (2, (V16QI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V16QI, V16QI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V8HI, V8HI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, UQI, UQI))
+DEF_LARCH_FTYPE (2, (V4SF, V2DI, V2DI))
+DEF_LARCH_FTYPE (1, (V2DI, V4SF))
+DEF_LARCH_FTYPE (2, (V2DI, UQI, USI))
+DEF_LARCH_FTYPE (2, (V2DI, UQI, UQI))
+DEF_LARCH_FTYPE (4, (VOID, SI, UQI, V16QI, CVPOINTER))
+DEF_LARCH_FTYPE (4, (VOID, SI, UQI, V8HI, CVPOINTER))
+DEF_LARCH_FTYPE (4, (VOID, SI, UQI, V4SI, CVPOINTER))
+DEF_LARCH_FTYPE (4, (VOID, SI, UQI, V2DI, CVPOINTER))
+DEF_LARCH_FTYPE (2, (V16QI, SI, CVPOINTER))
+DEF_LARCH_FTYPE (2, (V8HI, SI, CVPOINTER))
+DEF_LARCH_FTYPE (2, (V4SI, SI, CVPOINTER))
+DEF_LARCH_FTYPE (2, (V2DI, SI, CVPOINTER))
+DEF_LARCH_FTYPE (2, (V8HI, UV16QI, V16QI))
+DEF_LARCH_FTYPE (2, (V16QI, V16QI, UV16QI))
+DEF_LARCH_FTYPE (2, (UV16QI, V16QI, UV16QI))
+DEF_LARCH_FTYPE (2, (V8HI, V8HI, UV8HI))
+DEF_LARCH_FTYPE (2, (UV8HI, V8HI, UV8HI))
+DEF_LARCH_FTYPE (2, (V4SI, V4SI, UV4SI))
+DEF_LARCH_FTYPE (2, (UV4SI, V4SI, UV4SI))
+DEF_LARCH_FTYPE (2, (V4SI, V16QI, V16QI))
+DEF_LARCH_FTYPE (2, (V4SI, UV16QI, V16QI))
+DEF_LARCH_FTYPE (2, (UV4SI, UV16QI, UV16QI))
+DEF_LARCH_FTYPE (2, (V2DI, V2DI, UV2DI))
+DEF_LARCH_FTYPE (2, (UV2DI, UV8HI, UV8HI))
+DEF_LARCH_FTYPE (2, (V4SI, UV8HI, V8HI))
+DEF_LARCH_FTYPE (2, (V2DI, UV4SI, V4SI))
+DEF_LARCH_FTYPE (2, (V2DI, UV2DI, V2DI))
+DEF_LARCH_FTYPE (2, (V2DI, V8HI, V8HI))
+DEF_LARCH_FTYPE (2, (V2DI, UV8HI, V8HI))
+DEF_LARCH_FTYPE (2, (UV2DI, V2DI, UV2DI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, UV8HI, V8HI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, UV2DI, V2DI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, UV4SI, V4SI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, V8HI, V8HI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, UV8HI, V8HI))
+DEF_LARCH_FTYPE (3, (UV2DI, UV2DI, UV8HI, UV8HI))
+DEF_LARCH_FTYPE (3, (V8HI, V8HI, UV16QI, V16QI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, V16QI, V16QI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, UV16QI, V16QI))
+DEF_LARCH_FTYPE (3, (UV4SI, UV4SI, UV16QI, UV16QI))
+
+DEF_LARCH_FTYPE (4, (VOID, V16QI, CVPOINTER, SI, UQI))
+DEF_LARCH_FTYPE (4, (VOID, V8HI, CVPOINTER, SI, UQI))
+DEF_LARCH_FTYPE (4, (VOID, V4SI, CVPOINTER, SI, UQI))
+DEF_LARCH_FTYPE (4, (VOID, V2DI, CVPOINTER, SI, UQI))
+
+DEF_LARCH_FTYPE (2, (DI, V16QI, UQI))
+DEF_LARCH_FTYPE (2, (DI, V8HI, UQI))
+DEF_LARCH_FTYPE (2, (DI, V4SI, UQI))
+DEF_LARCH_FTYPE (2, (UDI, V16QI, UQI))
+DEF_LARCH_FTYPE (2, (UDI, V8HI, UQI))
+DEF_LARCH_FTYPE (2, (UDI, V4SI, UQI))
+
+DEF_LARCH_FTYPE (3, (UV16QI, UV16QI, V16QI, USI))
+DEF_LARCH_FTYPE (3, (UV8HI, UV8HI, V8HI, USI))
+DEF_LARCH_FTYPE (3, (UV4SI, UV4SI, V4SI, USI))
+DEF_LARCH_FTYPE (3, (UV2DI, UV2DI, V2DI, USI))
+
+DEF_LARCH_FTYPE (1, (BOOLEAN, V16QI))
+DEF_LARCH_FTYPE (2, (V16QI, CVPOINTER, CVPOINTER))
+DEF_LARCH_FTYPE (3, (VOID, V16QI, CVPOINTER, CVPOINTER))
+
+DEF_LARCH_FTYPE (3, (V16QI, V16QI, SI, UQI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, SI, UQI))
+DEF_LARCH_FTYPE (3, (V2DI, V2DI, DI, UQI))
+DEF_LARCH_FTYPE (3, (V4SI, V4SI, SI, UQI))
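
(As a reading aid, not itself part of the change: following the MIPS scheme
this file is based on, each DEF_LARCH_FTYPE (N, (RET, A1, ..., AN)) entry
produces the prototype enumerator LARCH_<RET>_FTYPE_<A1>_..._<AN> that the
built-in table in loongarch-builtins.cc refers to.  For example:)

    /* In loongarch-ftypes.def: */
    DEF_LARCH_FTYPE (2, (V4SI, V4SI, V4SI))

    /* ...provides the prototype referenced in loongarch-builtins.cc by: */
    LSX_BUILTIN (vsigncov_w, LARCH_V4SI_FTYPE_V4SI_V4SI),

    /* ...which declares a built-in usable from C as: */
    v4i32 __builtin_lsx_vsigncov_w (v4i32, v4i32);
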
diff --git a/gcc/config/loongarch/lsxintrin.h b/gcc/config/loongarch/lsxintrin.h
new file mode 100644
index 00000000000..ec42069904d
--- /dev/null
+++ b/gcc/config/loongarch/lsxintrin.h
@@ -0,0 +1,5181 @@
+/* LARCH Loongson SX intrinsics include file.
+
+   Copyright (C) 2018 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef _GCC_LOONGSON_SXINTRIN_H
+#define _GCC_LOONGSON_SXINTRIN_H 1
+
+#if defined(__loongarch_sx)
+typedef signed char v16i8 __attribute__ ((vector_size(16), aligned(16)));
+typedef signed char v16i8_b __attribute__ ((vector_size(16), aligned(1)));
+typedef unsigned char v16u8 __attribute__ ((vector_size(16), aligned(16)));
+typedef unsigned char v16u8_b __attribute__ ((vector_size(16), aligned(1)));
+typedef short v8i16 __attribute__ ((vector_size(16), aligned(16)));
+typedef short v8i16_h __attribute__ ((vector_size(16), aligned(2)));
+typedef unsigned short v8u16 __attribute__ ((vector_size(16), aligned(16)));
+typedef unsigned short v8u16_h __attribute__ ((vector_size(16), aligned(2)));
+typedef int v4i32 __attribute__ ((vector_size(16), aligned(16)));
+typedef int v4i32_w __attribute__ ((vector_size(16), aligned(4)));
+typedef unsigned int v4u32 __attribute__ ((vector_size(16), aligned(16)));
+typedef unsigned int v4u32_w __attribute__ ((vector_size(16), aligned(4)));
+typedef long long v2i64 __attribute__ ((vector_size(16), aligned(16)));
+typedef long long v2i64_d __attribute__ ((vector_size(16), aligned(8)));
+typedef unsigned long long v2u64 __attribute__ ((vector_size(16), aligned(16)));
+typedef unsigned long long v2u64_d __attribute__ ((vector_size(16), aligned(8)));
+typedef float v4f32 __attribute__ ((vector_size(16), aligned(16)));
+typedef float v4f32_w __attribute__ ((vector_size(16), aligned(4)));
+typedef double v2f64 __attribute__ ((vector_size(16), aligned(16)));
+typedef double v2f64_d __attribute__ ((vector_size(16), aligned(8)));
+
+typedef long long __m128i __attribute__ ((__vector_size__ (16), __may_alias__));
+typedef float __m128 __attribute__ ((__vector_size__ (16), __may_alias__));
+typedef double __m128d __attribute__ ((__vector_size__ (16), __may_alias__));
+
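+/* Usage sketch (illustrative): user code operates on the generic
+   __m128i/__m128/__m128d types; the lane-typed vectors above (v16i8,
+   v8u16, ...) only show up in the casts that feed the builtins.  A
+   minimal lane-wise byte addition, assuming -mlsx and a hypothetical
+   add_bytes() wrapper:
+
+     static __inline __m128i add_bytes (__m128i a, __m128i b)
+     {
+       return __lsx_vadd_b (a, b);
+     }
+*/
+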
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsll_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsll_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsll_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsll_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsll_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsll_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsll_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsll_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  V16QI, V16QI, UQI.  */
+#define __lsx_vslli_b(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vslli_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V8HI, V8HI, UQI.  */
+#define __lsx_vslli_h(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vslli_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V4SI, V4SI, UQI.  */
+#define __lsx_vslli_w(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vslli_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V2DI, V2DI, UQI.  */
+#define __lsx_vslli_d(/*__m128i*/ _1, /*ui6*/ _2) \
+  ((__m128i)__builtin_lsx_vslli_d ((v2i64)(_1), (_2)))
+
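+/* Note (illustrative): the register forms (__lsx_vsll_b/h/w/d) take the
+   per-element shift counts from the matching lanes of the second vector,
+   while the immediate forms (__lsx_vslli_b/h/w/d) are macros because the
+   ui3/ui4/ui5/ui6 operand must be an integer constant expression, e.g.
+
+     __m128i t = __lsx_vsll_h (v, cnt);
+     __m128i u = __lsx_vslli_h (v, 4);
+*/
+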
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsra_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsra_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsra_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsra_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsra_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsra_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsra_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsra_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  V16QI, V16QI, UQI.  */
+#define __lsx_vsrai_b(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vsrai_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V8HI, V8HI, UQI.  */
+#define __lsx_vsrai_h(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vsrai_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V4SI, V4SI, UQI.  */
+#define __lsx_vsrai_w(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vsrai_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V2DI, V2DI, UQI.  */
+#define __lsx_vsrai_d(/*__m128i*/ _1, /*ui6*/ _2) \
+  ((__m128i)__builtin_lsx_vsrai_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrar_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrar_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrar_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrar_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrar_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrar_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrar_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrar_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  V16QI, V16QI, UQI.  */
+#define __lsx_vsrari_b(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vsrari_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V8HI, V8HI, UQI.  */
+#define __lsx_vsrari_h(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vsrari_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V4SI, V4SI, UQI.  */
+#define __lsx_vsrari_w(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vsrari_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V2DI, V2DI, UQI.  */
+#define __lsx_vsrari_d(/*__m128i*/ _1, /*ui6*/ _2) \
+  ((__m128i)__builtin_lsx_vsrari_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrl_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrl_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrl_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrl_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrl_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrl_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrl_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrl_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  V16QI, V16QI, UQI.  */
+#define __lsx_vsrli_b(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vsrli_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V8HI, V8HI, UQI.  */
+#define __lsx_vsrli_h(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vsrli_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V4SI, V4SI, UQI.  */
+#define __lsx_vsrli_w(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vsrli_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V2DI, V2DI, UQI.  */
+#define __lsx_vsrli_d(/*__m128i*/ _1, /*ui6*/ _2) \
+  ((__m128i)__builtin_lsx_vsrli_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrlr_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrlr_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrlr_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrlr_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrlr_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrlr_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrlr_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrlr_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  V16QI, V16QI, UQI.  */
+#define __lsx_vsrlri_b(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vsrlri_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V8HI, V8HI, UQI.  */
+#define __lsx_vsrlri_h(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vsrlri_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V4SI, V4SI, UQI.  */
+#define __lsx_vsrlri_w(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vsrlri_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V2DI, V2DI, UQI.  */
+#define __lsx_vsrlri_d(/*__m128i*/ _1, /*ui6*/ _2) \
+  ((__m128i)__builtin_lsx_vsrlri_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitclr_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vbitclr_b ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitclr_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vbitclr_h ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitclr_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vbitclr_w ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitclr_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vbitclr_d ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UQI.  */
+#define __lsx_vbitclri_b(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vbitclri_b ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UQI.  */
+#define __lsx_vbitclri_h(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vbitclri_h ((v8u16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UQI.  */
+#define __lsx_vbitclri_w(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vbitclri_w ((v4u32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UQI.  */
+#define __lsx_vbitclri_d(/*__m128i*/ _1, /*ui6*/ _2) \
+  ((__m128i)__builtin_lsx_vbitclri_d ((v2u64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitset_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vbitset_b ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitset_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vbitset_h ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitset_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vbitset_w ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitset_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vbitset_d ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UQI.  */
+#define __lsx_vbitseti_b(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vbitseti_b ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UQI.  */
+#define __lsx_vbitseti_h(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vbitseti_h ((v8u16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UQI.  */
+#define __lsx_vbitseti_w(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vbitseti_w ((v4u32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UQI.  */
+#define __lsx_vbitseti_d(/*__m128i*/ _1, /*ui6*/ _2) \
+  ((__m128i)__builtin_lsx_vbitseti_d ((v2u64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitrev_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vbitrev_b ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitrev_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vbitrev_h ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitrev_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vbitrev_w ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitrev_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vbitrev_d ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UQI.  */
+#define __lsx_vbitrevi_b(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vbitrevi_b ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UQI.  */
+#define __lsx_vbitrevi_h(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vbitrevi_h ((v8u16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UQI.  */
+#define __lsx_vbitrevi_w(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vbitrevi_w ((v4u32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UQI.  */
+#define __lsx_vbitrevi_d(/*__m128i*/ _1, /*ui6*/ _2) \
+  ((__m128i)__builtin_lsx_vbitrevi_d ((v2u64)(_1), (_2)))
+
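+/* Note (illustrative): the vbitclr, vbitset and vbitrev families clear,
+   set or toggle one bit per element, selected by the corresponding lane
+   of the second operand (or by the immediate in the _i variants),
+   presumably taken modulo the element width.  For example, flipping the
+   top bit of every byte:
+
+     __m128i f = __lsx_vbitrevi_b (v, 7);
+*/
+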
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vadd_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vadd_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vadd_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vadd_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vadd_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vadd_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vadd_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vadd_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V16QI, V16QI, UQI.  */
+#define __lsx_vaddi_bu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vaddi_bu ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, V8HI, UQI.  */
+#define __lsx_vaddi_hu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vaddi_hu ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V4SI, V4SI, UQI.  */
+#define __lsx_vaddi_wu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vaddi_wu ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V2DI, V2DI, UQI.  */
+#define __lsx_vaddi_du(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vaddi_du ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsub_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsub_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsub_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsub_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsub_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsub_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsub_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsub_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V16QI, V16QI, UQI.  */
+#define __lsx_vsubi_bu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vsubi_bu ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, V8HI, UQI.  */
+#define __lsx_vsubi_hu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vsubi_hu ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V4SI, V4SI, UQI.  */
+#define __lsx_vsubi_wu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vsubi_wu ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V2DI, V2DI, UQI.  */
+#define __lsx_vsubi_du(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vsubi_du ((v2i64)(_1), (_2)))
+
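+/* Note (illustrative): vaddi and vsubi take an unsigned ui5 constant
+   (0 ... 31) as the second operand; the operation itself is the usual
+   modular per-element add or subtract, so it applies to signed data as
+   well, e.g.
+
+     __m128i n = __lsx_vaddi_wu (v, 1);
+*/
+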
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmax_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmax_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmax_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmax_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmax_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmax_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmax_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmax_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V16QI, V16QI, QI.  */
+#define __lsx_vmaxi_b(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vmaxi_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V8HI, V8HI, QI.  */
+#define __lsx_vmaxi_h(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vmaxi_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V4SI, V4SI, QI.  */
+#define __lsx_vmaxi_w(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vmaxi_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V2DI, V2DI, QI.  */
+#define __lsx_vmaxi_d(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vmaxi_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmax_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmax_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmax_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmax_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmax_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmax_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmax_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmax_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UQI.  */
+#define __lsx_vmaxi_bu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vmaxi_bu ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UQI.  */
+#define __lsx_vmaxi_hu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vmaxi_hu ((v8u16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UQI.  */
+#define __lsx_vmaxi_wu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vmaxi_wu ((v4u32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UQI.  */
+#define __lsx_vmaxi_du(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vmaxi_du ((v2u64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmin_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmin_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmin_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmin_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmin_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmin_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmin_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmin_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V16QI, V16QI, QI.  */
+#define __lsx_vmini_b(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vmini_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V8HI, V8HI, QI.  */
+#define __lsx_vmini_h(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vmini_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V4SI, V4SI, QI.  */
+#define __lsx_vmini_w(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vmini_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V2DI, V2DI, QI.  */
+#define __lsx_vmini_d(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vmini_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmin_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmin_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmin_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmin_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmin_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmin_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmin_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmin_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UQI.  */
+#define __lsx_vmini_bu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vmini_bu ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UQI.  */
+#define __lsx_vmini_hu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vmini_hu ((v8u16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UQI.  */
+#define __lsx_vmini_wu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vmini_wu ((v4u32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UQI.  */
+#define __lsx_vmini_du(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vmini_du ((v2u64)(_1), (_2)))
+
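+/* Note (illustrative): the signed immediate min/max forms (vmaxi_b ...
+   vmini_d) take an si5 constant (-16 ... 15), while the _bu/_hu/_wu/_du
+   variants take a ui5 constant (0 ... 31).  Clamping signed bytes to be
+   non-negative can therefore be written as:
+
+     __m128i p = __lsx_vmaxi_b (v, 0);
+*/
+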
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vseq_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vseq_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vseq_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vseq_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vseq_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vseq_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vseq_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vseq_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V16QI, V16QI, QI.  */
+#define __lsx_vseqi_b(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vseqi_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V8HI, V8HI, QI.  */
+#define __lsx_vseqi_h(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vseqi_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V4SI, V4SI, QI.  */
+#define __lsx_vseqi_w(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vseqi_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V2DI, V2DI, QI.  */
+#define __lsx_vseqi_d(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vseqi_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vslt_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vslt_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vslt_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vslt_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vslt_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vslt_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vslt_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vslt_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V16QI, V16QI, QI.  */
+#define __lsx_vslti_b(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vslti_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V8HI, V8HI, QI.  */
+#define __lsx_vslti_h(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vslti_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V4SI, V4SI, QI.  */
+#define __lsx_vslti_w(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vslti_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V2DI, V2DI, QI.  */
+#define __lsx_vslti_d(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vslti_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vslt_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vslt_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vslt_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vslt_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vslt_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vslt_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vslt_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vslt_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V16QI, UV16QI, UQI.  */
+#define __lsx_vslti_bu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vslti_bu ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, UV8HI, UQI.  */
+#define __lsx_vslti_hu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vslti_hu ((v8u16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V4SI, UV4SI, UQI.  */
+#define __lsx_vslti_wu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vslti_wu ((v4u32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V2DI, UV2DI, UQI.  */
+#define __lsx_vslti_du(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vslti_du ((v2u64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsle_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsle_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsle_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsle_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsle_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsle_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsle_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsle_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V16QI, V16QI, QI.  */
+#define __lsx_vslei_b(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vslei_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V8HI, V8HI, QI.  */
+#define __lsx_vslei_h(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vslei_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V4SI, V4SI, QI.  */
+#define __lsx_vslei_w(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vslei_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, si5.  */
+/* Data types in instruction templates:  V2DI, V2DI, QI.  */
+#define __lsx_vslei_d(/*__m128i*/ _1, /*si5*/ _2) \
+  ((__m128i)__builtin_lsx_vslei_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsle_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsle_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsle_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsle_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsle_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsle_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsle_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsle_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V16QI, UV16QI, UQI.  */
+#define __lsx_vslei_bu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vslei_bu ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, UV8HI, UQI.  */
+#define __lsx_vslei_hu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vslei_hu ((v8u16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V4SI, UV4SI, UQI.  */
+#define __lsx_vslei_wu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vslei_wu ((v4u32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V2DI, UV2DI, UQI.  */
+#define __lsx_vslei_du(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vslei_du ((v2u64)(_1), (_2)))
+
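+/* Note (illustrative): every comparison above (vseq, vslt, vsle and their
+   immediate and unsigned variants) produces a per-element mask, all ones
+   where the predicate holds and all zeros elsewhere, so the result can be
+   combined with the ordinary GNU C vector operators, e.g.
+
+     __m128i m   = __lsx_vslt_b (a, b);
+     __m128i sel = (a & m) | (c & ~m);
+*/
+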
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  V16QI, V16QI, UQI.  */
+#define __lsx_vsat_b(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vsat_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V8HI, V8HI, UQI.  */
+#define __lsx_vsat_h(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vsat_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V4SI, V4SI, UQI.  */
+#define __lsx_vsat_w(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vsat_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V2DI, V2DI, UQI.  */
+#define __lsx_vsat_d(/*__m128i*/ _1, /*ui6*/ _2) \
+  ((__m128i)__builtin_lsx_vsat_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UQI.  */
+#define __lsx_vsat_bu(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vsat_bu ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UQI.  */
+#define __lsx_vsat_hu(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vsat_hu ((v8u16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UQI.  */
+#define __lsx_vsat_wu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vsat_wu ((v4u32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UQI.  */
+#define __lsx_vsat_du(/*__m128i*/ _1, /*ui6*/ _2) \
+  ((__m128i)__builtin_lsx_vsat_du ((v2u64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vadda_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vadda_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vadda_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vadda_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vadda_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vadda_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vadda_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vadda_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsadd_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsadd_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsadd_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsadd_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsadd_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsadd_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsadd_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsadd_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsadd_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsadd_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsadd_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsadd_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsadd_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsadd_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsadd_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsadd_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavg_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavg_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavg_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavg_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavg_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavg_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavg_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavg_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavg_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavg_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavg_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavg_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavg_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavg_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavg_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavg_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavgr_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavgr_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavgr_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavgr_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavgr_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavgr_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavgr_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavgr_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavgr_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavgr_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavgr_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavgr_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavgr_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavgr_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vavgr_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vavgr_du ((v2u64)_1, (v2u64)_2);
+}
+
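+/* Note (illustrative): vavg computes the truncating per-element average,
+   (a + b) >> 1, while vavgr adds a rounding bias first, (a + b + 1) >> 1;
+   averaging 1 and 2 therefore gives 1 with the former and 2 with the
+   latter, e.g.
+
+     __m128i r = __lsx_vavgr_bu (x, y);
+*/
+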
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssub_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssub_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssub_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssub_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssub_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssub_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssub_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssub_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssub_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssub_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssub_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssub_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssub_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssub_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssub_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssub_du ((v2u64)_1, (v2u64)_2);
+}
+
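+/* Note (illustrative): unlike __lsx_vadd and __lsx_vsub, which wrap on
+   overflow, vsadd and vssub saturate to the element's signed or unsigned
+   range, so a signed-byte addition of 100 + 100 yields 127 rather than
+   the wrapped value -56, e.g.
+
+     __m128i s = __lsx_vsadd_b (x, y);
+*/
+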
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vabsd_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vabsd_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vabsd_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vabsd_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vabsd_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vabsd_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vabsd_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vabsd_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vabsd_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vabsd_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vabsd_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vabsd_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vabsd_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vabsd_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vabsd_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vabsd_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmul_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmul_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmul_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmul_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmul_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmul_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmul_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmul_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmadd_b (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmadd_b ((v16i8)_1, (v16i8)_2, (v16i8)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmadd_h (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmadd_h ((v8i16)_1, (v8i16)_2, (v8i16)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmadd_w (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmadd_w ((v4i32)_1, (v4i32)_2, (v4i32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmadd_d (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmadd_d ((v2i64)_1, (v2i64)_2, (v2i64)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmsub_b (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmsub_b ((v16i8)_1, (v16i8)_2, (v16i8)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmsub_h (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmsub_h ((v8i16)_1, (v8i16)_2, (v8i16)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmsub_w (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmsub_w ((v4i32)_1, (v4i32)_2, (v4i32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmsub_d (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmsub_d ((v2i64)_1, (v2i64)_2, (v2i64)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vdiv_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vdiv_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vdiv_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vdiv_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vdiv_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vdiv_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vdiv_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vdiv_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vdiv_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vdiv_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vdiv_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vdiv_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vdiv_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vdiv_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vdiv_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vdiv_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhaddw_h_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhaddw_h_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhaddw_w_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhaddw_w_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhaddw_d_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhaddw_d_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhaddw_hu_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhaddw_hu_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhaddw_wu_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhaddw_wu_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhaddw_du_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhaddw_du_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhsubw_h_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhsubw_h_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhsubw_w_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhsubw_w_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhsubw_d_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhsubw_d_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhsubw_hu_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhsubw_hu_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhsubw_wu_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhsubw_wu_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhsubw_du_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhsubw_du_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmod_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmod_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmod_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmod_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmod_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmod_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmod_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmod_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmod_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmod_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmod_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmod_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmod_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmod_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmod_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmod_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, rk.  */
+/* Data types in instruction templates:  V16QI, V16QI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vreplve_b (__m128i _1, int _2)
+{
+  return (__m128i)__builtin_lsx_vreplve_b ((v16i8)_1, (int)_2);
+}
+
+/* Assembly instruction format:	vd, vj, rk.  */
+/* Data types in instruction templates:  V8HI, V8HI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vreplve_h (__m128i _1, int _2)
+{
+  return (__m128i)__builtin_lsx_vreplve_h ((v8i16)_1, (int)_2);
+}
+
+/* Assembly instruction format:	vd, vj, rk.  */
+/* Data types in instruction templates:  V4SI, V4SI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vreplve_w (__m128i _1, int _2)
+{
+  return (__m128i)__builtin_lsx_vreplve_w ((v4i32)_1, (int)_2);
+}
+
+/* Assembly instruction format:	vd, vj, rk.  */
+/* Data types in instruction templates:  V2DI, V2DI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vreplve_d (__m128i _1, int _2)
+{
+  return (__m128i)__builtin_lsx_vreplve_d ((v2i64)_1, (int)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V16QI, V16QI, UQI.  */
+#define __lsx_vreplvei_b(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vreplvei_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  V8HI, V8HI, UQI.  */
+#define __lsx_vreplvei_h(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vreplvei_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui2.  */
+/* Data types in instruction templates:  V4SI, V4SI, UQI.  */
+#define __lsx_vreplvei_w(/*__m128i*/ _1, /*ui2*/ _2) \
+  ((__m128i)__builtin_lsx_vreplvei_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui1.  */
+/* Data types in instruction templates:  V2DI, V2DI, UQI.  */
+#define __lsx_vreplvei_d(/*__m128i*/ _1, /*ui1*/ _2) \
+  ((__m128i)__builtin_lsx_vreplvei_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpickev_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpickev_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpickev_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpickev_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpickev_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpickev_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpickev_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpickev_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpickod_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpickod_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpickod_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpickod_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpickod_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpickod_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpickod_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpickod_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vilvh_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vilvh_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vilvh_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vilvh_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vilvh_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vilvh_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vilvh_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vilvh_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vilvl_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vilvl_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vilvl_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vilvl_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vilvl_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vilvl_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vilvl_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vilvl_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpackev_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpackev_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpackev_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpackev_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpackev_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpackev_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpackev_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpackev_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpackod_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpackod_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpackod_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpackod_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpackod_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpackod_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpackod_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vpackod_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vshuf_h (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vshuf_h ((v8i16)_1, (v8i16)_2, (v8i16)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vshuf_w (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vshuf_w ((v4i32)_1, (v4i32)_2, (v4i32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vshuf_d (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vshuf_d ((v2i64)_1, (v2i64)_2, (v2i64)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vand_v (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vand_v ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UQI.  */
+#define __lsx_vandi_b(/*__m128i*/ _1, /*ui8*/ _2) \
+  ((__m128i)__builtin_lsx_vandi_b ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vor_v (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vor_v ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UQI.  */
+#define __lsx_vori_b(/*__m128i*/ _1, /*ui8*/ _2) \
+  ((__m128i)__builtin_lsx_vori_b ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vnor_v (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vnor_v ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UQI.  */
+#define __lsx_vnori_b(/*__m128i*/ _1, /*ui8*/ _2) \
+  ((__m128i)__builtin_lsx_vnori_b ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vxor_v (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vxor_v ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UQI.  */
+#define __lsx_vxori_b(/*__m128i*/ _1, /*ui8*/ _2) \
+  ((__m128i)__builtin_lsx_vxori_b ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk, va.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vbitsel_v (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vbitsel_v ((v16u8)_1, (v16u8)_2, (v16u8)_3);
+}
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI, USI.  */
+#define __lsx_vbitseli_b(/*__m128i*/ _1, /*__m128i*/ _2, /*ui8*/ _3) \
+  ((__m128i)__builtin_lsx_vbitseli_b ((v16u8)(_1), (v16u8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  V16QI, V16QI, USI.  */
+#define __lsx_vshuf4i_b(/*__m128i*/ _1, /*ui8*/ _2) \
+  ((__m128i)__builtin_lsx_vshuf4i_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  V8HI, V8HI, USI.  */
+#define __lsx_vshuf4i_h(/*__m128i*/ _1, /*ui8*/ _2) \
+  ((__m128i)__builtin_lsx_vshuf4i_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  V4SI, V4SI, USI.  */
+#define __lsx_vshuf4i_w(/*__m128i*/ _1, /*ui8*/ _2) \
+  ((__m128i)__builtin_lsx_vshuf4i_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, rj.  */
+/* Data types in instruction templates:  V16QI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vreplgr2vr_b (int _1)
+{
+  return (__m128i)__builtin_lsx_vreplgr2vr_b ((int)_1);
+}
+
+/* Assembly instruction format:	vd, rj.  */
+/* Data types in instruction templates:  V8HI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vreplgr2vr_h (int _1)
+{
+  return (__m128i)__builtin_lsx_vreplgr2vr_h ((int)_1);
+}
+
+/* Assembly instruction format:	vd, rj.  */
+/* Data types in instruction templates:  V4SI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vreplgr2vr_w (int _1)
+{
+  return (__m128i)__builtin_lsx_vreplgr2vr_w ((int)_1);
+}
+
+/* Assembly instruction format:	vd, rj.  */
+/* Data types in instruction templates:  V2DI, DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vreplgr2vr_d (long int _1)
+{
+  return (__m128i)__builtin_lsx_vreplgr2vr_d ((long int)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpcnt_b (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vpcnt_b ((v16i8)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpcnt_h (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vpcnt_h ((v8i16)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpcnt_w (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vpcnt_w ((v4i32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vpcnt_d (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vpcnt_d ((v2i64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vclo_b (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vclo_b ((v16i8)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vclo_h (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vclo_h ((v8i16)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vclo_w (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vclo_w ((v4i32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vclo_d (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vclo_d ((v2i64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vclz_b (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vclz_b ((v16i8)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vclz_h (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vclz_h ((v8i16)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vclz_w (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vclz_w ((v4i32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vclz_d (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vclz_d ((v2i64)_1);
+}
+
+/* Assembly instruction format:	rd, vj, ui4.  */
+/* Data types in instruction templates:  SI, V16QI, UQI.  */
+#define __lsx_vpickve2gr_b(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((int)__builtin_lsx_vpickve2gr_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	rd, vj, ui3.  */
+/* Data types in instruction templates:  SI, V8HI, UQI.  */
+#define __lsx_vpickve2gr_h(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((int)__builtin_lsx_vpickve2gr_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	rd, vj, ui2.  */
+/* Data types in instruction templates:  SI, V4SI, UQI.  */
+#define __lsx_vpickve2gr_w(/*__m128i*/ _1, /*ui2*/ _2) \
+  ((int)__builtin_lsx_vpickve2gr_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	rd, vj, ui1.  */
+/* Data types in instruction templates:  DI, V2DI, UQI.  */
+#define __lsx_vpickve2gr_d(/*__m128i*/ _1, /*ui1*/ _2) \
+  ((long int)__builtin_lsx_vpickve2gr_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	rd, vj, ui4.  */
+/* Data types in instruction templates:  USI, V16QI, UQI.  */
+#define __lsx_vpickve2gr_bu(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((unsigned int)__builtin_lsx_vpickve2gr_bu ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	rd, vj, ui3.  */
+/* Data types in instruction templates:  USI, V8HI, UQI.  */
+#define __lsx_vpickve2gr_hu(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((unsigned int)__builtin_lsx_vpickve2gr_hu ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	rd, vj, ui2.  */
+/* Data types in instruction templates:  USI, V4SI, UQI.  */
+#define __lsx_vpickve2gr_wu(/*__m128i*/ _1, /*ui2*/ _2) \
+  ((unsigned int)__builtin_lsx_vpickve2gr_wu ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	rd, vj, ui1.  */
+/* Data types in instruction templates:  UDI, V2DI, UQI.  */
+#define __lsx_vpickve2gr_du(/*__m128i*/ _1, /*ui1*/ _2) \
+  ((unsigned long int)__builtin_lsx_vpickve2gr_du ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, rj, ui4.  */
+/* Data types in instruction templates:  V16QI, V16QI, SI, UQI.  */
+#define __lsx_vinsgr2vr_b(/*__m128i*/ _1, /*int*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vinsgr2vr_b ((v16i8)(_1), (int)(_2), (_3)))
+
+/* Assembly instruction format:	vd, rj, ui3.  */
+/* Data types in instruction templates:  V8HI, V8HI, SI, UQI.  */
+#define __lsx_vinsgr2vr_h(/*__m128i*/ _1, /*int*/ _2, /*ui3*/ _3) \
+  ((__m128i)__builtin_lsx_vinsgr2vr_h ((v8i16)(_1), (int)(_2), (_3)))
+
+/* Assembly instruction format:	vd, rj, ui2.  */
+/* Data types in instruction templates:  V4SI, V4SI, SI, UQI.  */
+#define __lsx_vinsgr2vr_w(/*__m128i*/ _1, /*int*/ _2, /*ui2*/ _3) \
+  ((__m128i)__builtin_lsx_vinsgr2vr_w ((v4i32)(_1), (int)(_2), (_3)))
+
+/* Assembly instruction format:	vd, rj, ui1.  */
+/* Data types in instruction templates:  V2DI, V2DI, DI, UQI.  */
+#define __lsx_vinsgr2vr_d(/*__m128i*/ _1, /*long int*/ _2, /*ui1*/ _3) \
+  ((__m128i)__builtin_lsx_vinsgr2vr_d ((v2i64)(_1), (long int)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SF, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfadd_s (__m128 _1, __m128 _2)
+{
+  return (__m128)__builtin_lsx_vfadd_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfadd_d (__m128d _1, __m128d _2)
+{
+  return (__m128d)__builtin_lsx_vfadd_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SF, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfsub_s (__m128 _1, __m128 _2)
+{
+  return (__m128)__builtin_lsx_vfsub_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfsub_d (__m128d _1, __m128d _2)
+{
+  return (__m128d)__builtin_lsx_vfsub_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SF, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfmul_s (__m128 _1, __m128 _2)
+{
+  return (__m128)__builtin_lsx_vfmul_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfmul_d (__m128d _1, __m128d _2)
+{
+  return (__m128d)__builtin_lsx_vfmul_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SF, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfdiv_s (__m128 _1, __m128 _2)
+{
+  return (__m128)__builtin_lsx_vfdiv_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfdiv_d (__m128d _1, __m128d _2)
+{
+  return (__m128d)__builtin_lsx_vfdiv_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcvt_h_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcvt_h_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfcvt_s_d (__m128d _1, __m128d _2)
+{
+  return (__m128)__builtin_lsx_vfcvt_s_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SF, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfmin_s (__m128 _1, __m128 _2)
+{
+  return (__m128)__builtin_lsx_vfmin_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfmin_d (__m128d _1, __m128d _2)
+{
+  return (__m128d)__builtin_lsx_vfmin_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SF, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfmina_s (__m128 _1, __m128 _2)
+{
+  return (__m128)__builtin_lsx_vfmina_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfmina_d (__m128d _1, __m128d _2)
+{
+  return (__m128d)__builtin_lsx_vfmina_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SF, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfmax_s (__m128 _1, __m128 _2)
+{
+  return (__m128)__builtin_lsx_vfmax_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfmax_d (__m128d _1, __m128d _2)
+{
+  return (__m128d)__builtin_lsx_vfmax_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SF, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfmaxa_s (__m128 _1, __m128 _2)
+{
+  return (__m128)__builtin_lsx_vfmaxa_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfmaxa_d (__m128d _1, __m128d _2)
+{
+  return (__m128d)__builtin_lsx_vfmaxa_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfclass_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vfclass_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfclass_d (__m128d _1)
+{
+  return (__m128i)__builtin_lsx_vfclass_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfsqrt_s (__m128 _1)
+{
+  return (__m128)__builtin_lsx_vfsqrt_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfsqrt_d (__m128d _1)
+{
+  return (__m128d)__builtin_lsx_vfsqrt_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfrecip_s (__m128 _1)
+{
+  return (__m128)__builtin_lsx_vfrecip_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfrecip_d (__m128d _1)
+{
+  return (__m128d)__builtin_lsx_vfrecip_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfrint_s (__m128 _1)
+{
+  return (__m128)__builtin_lsx_vfrint_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfrint_d (__m128d _1)
+{
+  return (__m128d)__builtin_lsx_vfrint_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfrsqrt_s (__m128 _1)
+{
+  return (__m128)__builtin_lsx_vfrsqrt_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfrsqrt_d (__m128d _1)
+{
+  return (__m128d)__builtin_lsx_vfrsqrt_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vflogb_s (__m128 _1)
+{
+  return (__m128)__builtin_lsx_vflogb_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vflogb_d (__m128d _1)
+{
+  return (__m128d)__builtin_lsx_vflogb_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SF, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfcvth_s_h (__m128i _1)
+{
+  return (__m128)__builtin_lsx_vfcvth_s_h ((v8i16)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfcvth_d_s (__m128 _1)
+{
+  return (__m128d)__builtin_lsx_vfcvth_d_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SF, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfcvtl_s_h (__m128i _1)
+{
+  return (__m128)__builtin_lsx_vfcvtl_s_h ((v8i16)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfcvtl_d_s (__m128 _1)
+{
+  return (__m128d)__builtin_lsx_vfcvtl_d_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftint_w_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftint_w_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftint_l_d (__m128d _1)
+{
+  return (__m128i)__builtin_lsx_vftint_l_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  UV4SI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftint_wu_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftint_wu_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  UV2DI, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftint_lu_d (__m128d _1)
+{
+  return (__m128i)__builtin_lsx_vftint_lu_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrz_w_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrz_w_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrz_l_d (__m128d _1)
+{
+  return (__m128i)__builtin_lsx_vftintrz_l_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  UV4SI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrz_wu_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrz_wu_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  UV2DI, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrz_lu_d (__m128d _1)
+{
+  return (__m128i)__builtin_lsx_vftintrz_lu_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SF, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vffint_s_w (__m128i _1)
+{
+  return (__m128)__builtin_lsx_vffint_s_w ((v4i32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DF, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vffint_d_l (__m128i _1)
+{
+  return (__m128d)__builtin_lsx_vffint_d_l ((v2i64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SF, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vffint_s_wu (__m128i _1)
+{
+  return (__m128)__builtin_lsx_vffint_s_wu ((v4u32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DF, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vffint_d_lu (__m128i _1)
+{
+  return (__m128d)__builtin_lsx_vffint_d_lu ((v2u64)_1);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vandn_v (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vandn_v ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vneg_b (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vneg_b ((v16i8)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vneg_h (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vneg_h ((v8i16)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vneg_w (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vneg_w ((v4i32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vneg_d (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vneg_d ((v2i64)_1);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmuh_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmuh_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmuh_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmuh_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmuh_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmuh_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmuh_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmuh_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmuh_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmuh_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmuh_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmuh_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmuh_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmuh_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmuh_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmuh_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  V8HI, V16QI, UQI.  */
+#define __lsx_vsllwil_h_b(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vsllwil_h_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V4SI, V8HI, UQI.  */
+#define __lsx_vsllwil_w_h(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vsllwil_w_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V2DI, V4SI, UQI.  */
+#define __lsx_vsllwil_d_w(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vsllwil_d_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  UV8HI, UV16QI, UQI.  */
+#define __lsx_vsllwil_hu_bu(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vsllwil_hu_bu ((v16u8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  UV4SI, UV8HI, UQI.  */
+#define __lsx_vsllwil_wu_hu(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vsllwil_wu_hu ((v8u16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV2DI, UV4SI, UQI.  */
+#define __lsx_vsllwil_du_wu(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vsllwil_du_wu ((v4u32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsran_b_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsran_b_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsran_h_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsran_h_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsran_w_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsran_w_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssran_b_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssran_b_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssran_h_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssran_h_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssran_w_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssran_w_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssran_bu_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssran_bu_h ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssran_hu_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssran_hu_w ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssran_wu_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssran_wu_d ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrarn_b_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrarn_b_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrarn_h_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrarn_h_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrarn_w_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrarn_w_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrarn_b_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrarn_b_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrarn_h_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrarn_h_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrarn_w_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrarn_w_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrarn_bu_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrarn_bu_h ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrarn_hu_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrarn_hu_w ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrarn_wu_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrarn_wu_d ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrln_b_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrln_b_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrln_h_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrln_h_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrln_w_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrln_w_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrln_bu_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrln_bu_h ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrln_hu_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrln_hu_w ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrln_wu_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrln_wu_d ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrlrn_b_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrlrn_b_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrlrn_h_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrlrn_h_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsrlrn_w_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsrlrn_w_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV16QI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrlrn_bu_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrlrn_bu_h ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrlrn_hu_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrlrn_hu_w ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrlrn_wu_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrlrn_wu_d ((v2u64)_1, (v2u64)_2);
+}
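+
+/* Usage note: the vsran/vsrln families shift each element of the first
+   source right by the per-element amount taken from the second source
+   (arithmetic shift for *sran*, logical for *srln*) and narrow the
+   results to half width; an 'r' in the name adds rounding and 'ss' adds
+   saturation (to the unsigned range for the *_bu/_hu/_wu forms).  */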
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, UQI.  */
+#define __lsx_vfrstpi_b(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vfrstpi_b ((v16i8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, UQI.  */
+#define __lsx_vfrstpi_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vfrstpi_h ((v8i16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfrstp_b (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vfrstp_b ((v16i8)_1, (v16i8)_2, (v16i8)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfrstp_h (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vfrstp_h ((v8i16)_1, (v8i16)_2, (v8i16)_3);
+}
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, USI.  */
+#define __lsx_vshuf4i_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui8*/ _3) \
+  ((__m128i)__builtin_lsx_vshuf4i_d ((v2i64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V16QI, V16QI, UQI.  */
+#define __lsx_vbsrl_v(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vbsrl_v ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V16QI, V16QI, UQI.  */
+#define __lsx_vbsll_v(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vbsll_v ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, USI.  */
+#define __lsx_vextrins_b(/*__m128i*/ _1, /*__m128i*/ _2, /*ui8*/ _3) \
+  ((__m128i)__builtin_lsx_vextrins_b ((v16i8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, USI.  */
+#define __lsx_vextrins_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui8*/ _3) \
+  ((__m128i)__builtin_lsx_vextrins_h ((v8i16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, USI.  */
+#define __lsx_vextrins_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui8*/ _3) \
+  ((__m128i)__builtin_lsx_vextrins_w ((v4i32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, USI.  */
+#define __lsx_vextrins_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui8*/ _3) \
+  ((__m128i)__builtin_lsx_vextrins_d ((v2i64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmskltz_b (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vmskltz_b ((v16i8)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmskltz_h (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vmskltz_h ((v8i16)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmskltz_w (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vmskltz_w ((v4i32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmskltz_d (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vmskltz_d ((v2i64)_1);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsigncov_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsigncov_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsigncov_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsigncov_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsigncov_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsigncov_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsigncov_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsigncov_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk, va.  */
+/* Data types in instruction templates:  V4SF, V4SF, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfmadd_s (__m128 _1, __m128 _2, __m128 _3)
+{
+  return (__m128)__builtin_lsx_vfmadd_s ((v4f32)_1, (v4f32)_2, (v4f32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk, va.  */
+/* Data types in instruction templates:  V2DF, V2DF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfmadd_d (__m128d _1, __m128d _2, __m128d _3)
+{
+  return (__m128d)__builtin_lsx_vfmadd_d ((v2f64)_1, (v2f64)_2, (v2f64)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk, va.  */
+/* Data types in instruction templates:  V4SF, V4SF, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfmsub_s (__m128 _1, __m128 _2, __m128 _3)
+{
+  return (__m128)__builtin_lsx_vfmsub_s ((v4f32)_1, (v4f32)_2, (v4f32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk, va.  */
+/* Data types in instruction templates:  V2DF, V2DF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfmsub_d (__m128d _1, __m128d _2, __m128d _3)
+{
+  return (__m128d)__builtin_lsx_vfmsub_d ((v2f64)_1, (v2f64)_2, (v2f64)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk, va.  */
+/* Data types in instruction templates:  V4SF, V4SF, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfnmadd_s (__m128 _1, __m128 _2, __m128 _3)
+{
+  return (__m128)__builtin_lsx_vfnmadd_s ((v4f32)_1, (v4f32)_2, (v4f32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk, va.  */
+/* Data types in instruction templates:  V2DF, V2DF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfnmadd_d (__m128d _1, __m128d _2, __m128d _3)
+{
+  return (__m128d)__builtin_lsx_vfnmadd_d ((v2f64)_1, (v2f64)_2, (v2f64)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk, va.  */
+/* Data types in instruction templates:  V4SF, V4SF, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfnmsub_s (__m128 _1, __m128 _2, __m128 _3)
+{
+  return (__m128)__builtin_lsx_vfnmsub_s ((v4f32)_1, (v4f32)_2, (v4f32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk, va.  */
+/* Data types in instruction templates:  V2DF, V2DF, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfnmsub_d (__m128d _1, __m128d _2, __m128d _3)
+{
+  return (__m128d)__builtin_lsx_vfnmsub_d ((v2f64)_1, (v2f64)_2, (v2f64)_3);
+}
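+
+/* Usage sketch (illustrative variable names, assumes LSX is enabled):
+
+     __m128 a, b, c;
+     __m128 r = __lsx_vfmadd_s (a, b, c);   fused a[i] * b[i] + c[i]
+
+   __lsx_vfmsub_s computes a[i] * b[i] - c[i] per lane, and the
+   vfnmadd/vfnmsub intrinsics are the negated counterparts.  */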
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrne_w_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrne_w_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrne_l_d (__m128d _1)
+{
+  return (__m128i)__builtin_lsx_vftintrne_l_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrp_w_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrp_w_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrp_l_d (__m128d _1)
+{
+  return (__m128i)__builtin_lsx_vftintrp_l_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrm_w_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrm_w_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrm_l_d (__m128d _1)
+{
+  return (__m128i)__builtin_lsx_vftintrm_l_d ((v2f64)_1);
+}
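+
+/* Usage note: the vftint* intrinsics convert floating-point elements to
+   integers; the suffix selects the rounding mode (rne = to nearest even,
+   rz = toward zero, rp = toward +inf, rm = toward -inf), and the *l_*/
+   *h_* forms further below convert only the low or high half of the
+   source, widening each result to 64 bits.  */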
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftint_w_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vftint_w_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SF, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vffint_s_l (__m128i _1, __m128i _2)
+{
+  return (__m128)__builtin_lsx_vffint_s_l ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrz_w_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vftintrz_w_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrp_w_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vftintrp_w_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrm_w_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vftintrm_w_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrne_w_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vftintrne_w_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintl_l_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintl_l_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftinth_l_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftinth_l_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DF, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vffinth_d_w (__m128i _1)
+{
+  return (__m128d)__builtin_lsx_vffinth_d_w ((v4i32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DF, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vffintl_d_w (__m128i _1)
+{
+  return (__m128d)__builtin_lsx_vffintl_d_w ((v4i32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrzl_l_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrzl_l_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrzh_l_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrzh_l_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrpl_l_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrpl_l_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrph_l_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrph_l_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrml_l_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrml_l_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrmh_l_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrmh_l_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrnel_l_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrnel_l_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vftintrneh_l_s (__m128 _1)
+{
+  return (__m128i)__builtin_lsx_vftintrneh_l_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfrintrne_s (__m128 _1)
+{
+  return (__m128)__builtin_lsx_vfrintrne_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfrintrne_d (__m128d _1)
+{
+  return (__m128d)__builtin_lsx_vfrintrne_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfrintrz_s (__m128 _1)
+{
+  return (__m128)__builtin_lsx_vfrintrz_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfrintrz_d (__m128d _1)
+{
+  return (__m128d)__builtin_lsx_vfrintrz_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfrintrp_s (__m128 _1)
+{
+  return (__m128)__builtin_lsx_vfrintrp_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfrintrp_d (__m128d _1)
+{
+  return (__m128d)__builtin_lsx_vfrintrp_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128 __lsx_vfrintrm_s (__m128 _1)
+{
+  return (__m128)__builtin_lsx_vfrintrm_s ((v4f32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128d __lsx_vfrintrm_d (__m128d _1)
+{
+  return (__m128d)__builtin_lsx_vfrintrm_d ((v2f64)_1);
+}
+
+/* Assembly instruction format:	vd, rj, si8, idx.  */
+/* Data types in instruction templates:  VOID, V16QI, CVPOINTER, SI, UQI.  */
+#define __lsx_vstelm_b(/*__m128i*/ _1, /*void **/ _2, /*si8*/ _3, /*idx*/ _4) \
+  ((void)__builtin_lsx_vstelm_b ((v16i8)(_1), (void *)(_2), (_3), (_4)))
+
+/* Assembly instruction format:	vd, rj, si8, idx.  */
+/* Data types in instruction templates:  VOID, V8HI, CVPOINTER, SI, UQI.  */
+#define __lsx_vstelm_h(/*__m128i*/ _1, /*void **/ _2, /*si8*/ _3, /*idx*/ _4) \
+  ((void)__builtin_lsx_vstelm_h ((v8i16)(_1), (void *)(_2), (_3), (_4)))
+
+/* Assembly instruction format:	vd, rj, si8, idx.  */
+/* Data types in instruction templates:  VOID, V4SI, CVPOINTER, SI, UQI.  */
+#define __lsx_vstelm_w(/*__m128i*/ _1, /*void **/ _2, /*si8*/ _3, /*idx*/ _4) \
+  ((void)__builtin_lsx_vstelm_w ((v4i32)(_1), (void *)(_2), (_3), (_4)))
+
+/* Assembly instruction format:	vd, rj, si8, idx.  */
+/* Data types in instruction templates:  VOID, V2DI, CVPOINTER, SI, UQI.  */
+#define __lsx_vstelm_d(/*__m128i*/ _1, /*void **/ _2, /*si8*/ _3, /*idx*/ _4) \
+  ((void)__builtin_lsx_vstelm_d ((v2i64)(_1), (void *)(_2), (_3), (_4)))
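+
+/* Usage note: the __lsx_vstelm_* macros store a single element of the
+   vector (selected by the idx argument) to memory at the given base
+   address plus a small immediate offset; see the ISA manual for the
+   exact offset encoding.  */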
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwev_d_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwev_d_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwev_w_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwev_w_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwev_h_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwev_h_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwod_d_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwod_d_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwod_w_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwod_w_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwod_h_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwod_h_b ((v16i8)_1, (v16i8)_2);
+}
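+
+/* Usage note: vaddwev_* add the even-indexed element pairs of the two
+   sources and vaddwod_* the odd-indexed pairs, widening each operand to
+   twice its width before the add; the *_wu/_hu/_bu variants below
+   zero-extend both operands, and the *_wu_w style forms zero-extend the
+   first operand while sign-extending the second.  The vsubw*/vmulw*
+   families follow the same even/odd widening pattern.  */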
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwev_d_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwev_d_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwev_w_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwev_w_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwev_h_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwev_h_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwod_d_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwod_d_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwod_w_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwod_w_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwod_h_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwod_h_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwev_d_wu_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwev_d_wu_w ((v4u32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwev_w_hu_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwev_w_hu_h ((v8u16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwev_h_bu_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwev_h_bu_b ((v16u8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwod_d_wu_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwod_d_wu_w ((v4u32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwod_w_hu_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwod_w_hu_h ((v8u16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwod_h_bu_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwod_h_bu_b ((v16u8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwev_d_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwev_d_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwev_w_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwev_w_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwev_h_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwev_h_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwod_d_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwod_d_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwod_w_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwod_w_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwod_h_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwod_h_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwev_d_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwev_d_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwev_w_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwev_w_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwev_h_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwev_h_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwod_d_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwod_d_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwod_w_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwod_w_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwod_h_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwod_h_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwev_q_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwev_q_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwod_q_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwod_q_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwev_q_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwev_q_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwod_q_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwod_q_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwev_q_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwev_q_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwod_q_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwod_q_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwev_q_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwev_q_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsubwod_q_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsubwod_q_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwev_q_du_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwev_q_du_d ((v2u64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vaddwod_q_du_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vaddwod_q_du_d ((v2u64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwev_d_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwev_d_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwev_w_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwev_w_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwev_h_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwev_h_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwod_d_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwod_d_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwod_w_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwod_w_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwod_h_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwod_h_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwev_d_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwev_d_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwev_w_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwev_w_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwev_h_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwev_h_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwod_d_wu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwod_d_wu ((v4u32)_1, (v4u32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwod_w_hu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwod_w_hu ((v8u16)_1, (v8u16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwod_h_bu (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwod_h_bu ((v16u8)_1, (v16u8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwev_d_wu_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwev_d_wu_w ((v4u32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwev_w_hu_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwev_w_hu_h ((v8u16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwev_h_bu_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwev_h_bu_b ((v16u8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwod_d_wu_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwod_d_wu_w ((v4u32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, UV8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwod_w_hu_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwod_w_hu_h ((v8u16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, UV16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwod_h_bu_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwod_h_bu_b ((v16u8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwev_q_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwev_q_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwod_q_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwod_q_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwev_q_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwev_q_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwod_q_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwod_q_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwev_q_du_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwev_q_du_d ((v2u64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, UV2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmulwod_q_du_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vmulwod_q_du_d ((v2u64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhaddw_q_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhaddw_q_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhaddw_qu_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhaddw_qu_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhsubw_q_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhsubw_q_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vhsubw_qu_du (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vhsubw_qu_du ((v2u64)_1, (v2u64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwev_d_w (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwev_d_w ((v2i64)_1, (v4i32)_2, (v4i32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwev_w_h (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwev_w_h ((v4i32)_1, (v8i16)_2, (v8i16)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwev_h_b (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwev_h_b ((v8i16)_1, (v16i8)_2, (v16i8)_3);
+}
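+
+/* Usage note: the vmaddwev_*/vmaddwod_* intrinsics accumulate the
+   widening product of the even- or odd-indexed element pairs of the
+   second and third operands into the first operand; e.g. with
+   illustrative names, acc = __lsx_vmaddwev_d_w (acc, a, b) performs
+   roughly acc[i] += (int64_t) a[2*i] * b[2*i].  */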
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwev_d_wu (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwev_d_wu ((v2u64)_1, (v4u32)_2, (v4u32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwev_w_hu (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwev_w_hu ((v4u32)_1, (v8u16)_2, (v8u16)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwev_h_bu (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwev_h_bu ((v8u16)_1, (v16u8)_2, (v16u8)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwod_d_w (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwod_d_w ((v2i64)_1, (v4i32)_2, (v4i32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwod_w_h (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwod_w_h ((v4i32)_1, (v8i16)_2, (v8i16)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwod_h_b (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwod_h_b ((v8i16)_1, (v16i8)_2, (v16i8)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV4SI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwod_d_wu (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwod_d_wu ((v2u64)_1, (v4u32)_2, (v4u32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, UV8HI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwod_w_hu (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwod_w_hu ((v4u32)_1, (v8u16)_2, (v8u16)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, UV16QI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwod_h_bu (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwod_h_bu ((v8u16)_1, (v16u8)_2, (v16u8)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, UV4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwev_d_wu_w (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwev_d_wu_w ((v2i64)_1, (v4u32)_2, (v4i32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, UV8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwev_w_hu_h (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwev_w_hu_h ((v4i32)_1, (v8u16)_2, (v8i16)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, UV16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwev_h_bu_b (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwev_h_bu_b ((v8i16)_1, (v16u8)_2, (v16i8)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, UV4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwod_d_wu_w (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwod_d_wu_w ((v2i64)_1, (v4u32)_2, (v4i32)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, UV8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwod_w_hu_h (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwod_w_hu_h ((v4i32)_1, (v8u16)_2, (v8i16)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, UV16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwod_h_bu_b (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwod_h_bu_b ((v8i16)_1, (v16u8)_2, (v16i8)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwev_q_d (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwev_q_d ((v2i64)_1, (v2i64)_2, (v2i64)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwod_q_d (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwod_q_d ((v2i64)_1, (v2i64)_2, (v2i64)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwev_q_du (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwev_q_du ((v2u64)_1, (v2u64)_2, (v2u64)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwod_q_du (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwod_q_du ((v2u64)_1, (v2u64)_2, (v2u64)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, UV2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwev_q_du_d (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwev_q_du_d ((v2i64)_1, (v2u64)_2, (v2i64)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, UV2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmaddwod_q_du_d (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vmaddwod_q_du_d ((v2i64)_1, (v2u64)_2, (v2i64)_3);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vrotr_b (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vrotr_b ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vrotr_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vrotr_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vrotr_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vrotr_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vrotr_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vrotr_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vadd_q (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vadd_q ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vsub_q (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vsub_q ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, rj, si12.  */
+/* Data types in instruction templates:  V16QI, CVPOINTER, SI.  */
+#define __lsx_vldrepl_b(/*void **/ _1, /*si12*/ _2) \
+  ((__m128i)__builtin_lsx_vldrepl_b ((void *)(_1), (_2)))
+
+/* Assembly instruction format:	vd, rj, si11.  */
+/* Data types in instruction templates:  V8HI, CVPOINTER, SI.  */
+#define __lsx_vldrepl_h(/*void **/ _1, /*si11*/ _2) \
+  ((__m128i)__builtin_lsx_vldrepl_h ((void *)(_1), (_2)))
+
+/* Assembly instruction format:	vd, rj, si10.  */
+/* Data types in instruction templates:  V4SI, CVPOINTER, SI.  */
+#define __lsx_vldrepl_w(/*void **/ _1, /*si10*/ _2) \
+  ((__m128i)__builtin_lsx_vldrepl_w ((void *)(_1), (_2)))
+
+/* Assembly instruction format:	vd, rj, si9.  */
+/* Data types in instruction templates:  V2DI, CVPOINTER, SI.  */
+#define __lsx_vldrepl_d(/*void **/ _1, /*si9*/ _2) \
+  ((__m128i)__builtin_lsx_vldrepl_d ((void *)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmskgez_b (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vmskgez_b ((v16i8)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vmsknz_b (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vmsknz_b ((v16i8)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V8HI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vexth_h_b (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vexth_h_b ((v16i8)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V4SI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vexth_w_h (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vexth_w_h ((v8i16)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vexth_d_w (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vexth_d_w ((v4i32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vexth_q_d (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vexth_q_d ((v2i64)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  UV8HI, UV16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vexth_hu_bu (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vexth_hu_bu ((v16u8)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  UV4SI, UV8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vexth_wu_hu (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vexth_wu_hu ((v8u16)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  UV2DI, UV4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vexth_du_wu (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vexth_du_wu ((v4u32)_1);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vexth_qu_du (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vexth_qu_du ((v2u64)_1);
+}
+
+/* Assembly instruction format:	vd, vj, ui3.  */
+/* Data types in instruction templates:  V16QI, V16QI, UQI.  */
+#define __lsx_vrotri_b(/*__m128i*/ _1, /*ui3*/ _2) \
+  ((__m128i)__builtin_lsx_vrotri_b ((v16i8)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V8HI, V8HI, UQI.  */
+#define __lsx_vrotri_h(/*__m128i*/ _1, /*ui4*/ _2) \
+  ((__m128i)__builtin_lsx_vrotri_h ((v8i16)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V4SI, V4SI, UQI.  */
+#define __lsx_vrotri_w(/*__m128i*/ _1, /*ui5*/ _2) \
+  ((__m128i)__builtin_lsx_vrotri_w ((v4i32)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V2DI, V2DI, UQI.  */
+#define __lsx_vrotri_d(/*__m128i*/ _1, /*ui6*/ _2) \
+  ((__m128i)__builtin_lsx_vrotri_d ((v2i64)(_1), (_2)))
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vextl_q_d (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vextl_q_d ((v2i64)_1);
+}
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, USI.  */
+#define __lsx_vsrlni_b_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vsrlni_b_h ((v16i8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, USI.  */
+#define __lsx_vsrlni_h_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vsrlni_h_w ((v8i16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, USI.  */
+#define __lsx_vsrlni_w_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui6*/ _3) \
+  ((__m128i)__builtin_lsx_vsrlni_w_d ((v4i32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui7.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, USI.  */
+#define __lsx_vsrlni_d_q(/*__m128i*/ _1, /*__m128i*/ _2, /*ui7*/ _3) \
+  ((__m128i)__builtin_lsx_vsrlni_d_q ((v2i64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, USI.  */
+#define __lsx_vsrlrni_b_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vsrlrni_b_h ((v16i8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, USI.  */
+#define __lsx_vsrlrni_h_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vsrlrni_h_w ((v8i16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, USI.  */
+#define __lsx_vsrlrni_w_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui6*/ _3) \
+  ((__m128i)__builtin_lsx_vsrlrni_w_d ((v4i32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui7.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, USI.  */
+#define __lsx_vsrlrni_d_q(/*__m128i*/ _1, /*__m128i*/ _2, /*ui7*/ _3) \
+  ((__m128i)__builtin_lsx_vsrlrni_d_q ((v2i64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, USI.  */
+#define __lsx_vssrlni_b_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlni_b_h ((v16i8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, USI.  */
+#define __lsx_vssrlni_h_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlni_h_w ((v8i16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, USI.  */
+#define __lsx_vssrlni_w_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui6*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlni_w_d ((v4i32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui7.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, USI.  */
+#define __lsx_vssrlni_d_q(/*__m128i*/ _1, /*__m128i*/ _2, /*ui7*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlni_d_q ((v2i64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, V16QI, USI.  */
+#define __lsx_vssrlni_bu_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlni_bu_h ((v16u8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, V8HI, USI.  */
+#define __lsx_vssrlni_hu_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlni_hu_w ((v8u16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, V4SI, USI.  */
+#define __lsx_vssrlni_wu_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui6*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlni_wu_d ((v4u32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui7.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, V2DI, USI.  */
+#define __lsx_vssrlni_du_q(/*__m128i*/ _1, /*__m128i*/ _2, /*ui7*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlni_du_q ((v2u64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, USI.  */
+#define __lsx_vssrlrni_b_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlrni_b_h ((v16i8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, USI.  */
+#define __lsx_vssrlrni_h_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlrni_h_w ((v8i16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, USI.  */
+#define __lsx_vssrlrni_w_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui6*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlrni_w_d ((v4i32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui7.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, USI.  */
+#define __lsx_vssrlrni_d_q(/*__m128i*/ _1, /*__m128i*/ _2, /*ui7*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlrni_d_q ((v2i64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, V16QI, USI.  */
+#define __lsx_vssrlrni_bu_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlrni_bu_h ((v16u8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, V8HI, USI.  */
+#define __lsx_vssrlrni_hu_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlrni_hu_w ((v8u16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, V4SI, USI.  */
+#define __lsx_vssrlrni_wu_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui6*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlrni_wu_d ((v4u32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui7.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, V2DI, USI.  */
+#define __lsx_vssrlrni_du_q(/*__m128i*/ _1, /*__m128i*/ _2, /*ui7*/ _3) \
+  ((__m128i)__builtin_lsx_vssrlrni_du_q ((v2u64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, USI.  */
+#define __lsx_vsrani_b_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vsrani_b_h ((v16i8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, USI.  */
+#define __lsx_vsrani_h_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vsrani_h_w ((v8i16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, USI.  */
+#define __lsx_vsrani_w_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui6*/ _3) \
+  ((__m128i)__builtin_lsx_vsrani_w_d ((v4i32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui7.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, USI.  */
+#define __lsx_vsrani_d_q(/*__m128i*/ _1, /*__m128i*/ _2, /*ui7*/ _3) \
+  ((__m128i)__builtin_lsx_vsrani_d_q ((v2i64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, USI.  */
+#define __lsx_vsrarni_b_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vsrarni_b_h ((v16i8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, USI.  */
+#define __lsx_vsrarni_h_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vsrarni_h_w ((v8i16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, USI.  */
+#define __lsx_vsrarni_w_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui6*/ _3) \
+  ((__m128i)__builtin_lsx_vsrarni_w_d ((v4i32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui7.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, USI.  */
+#define __lsx_vsrarni_d_q(/*__m128i*/ _1, /*__m128i*/ _2, /*ui7*/ _3) \
+  ((__m128i)__builtin_lsx_vsrarni_d_q ((v2i64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, USI.  */
+#define __lsx_vssrani_b_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vssrani_b_h ((v16i8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, USI.  */
+#define __lsx_vssrani_h_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vssrani_h_w ((v8i16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, USI.  */
+#define __lsx_vssrani_w_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui6*/ _3) \
+  ((__m128i)__builtin_lsx_vssrani_w_d ((v4i32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui7.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, USI.  */
+#define __lsx_vssrani_d_q(/*__m128i*/ _1, /*__m128i*/ _2, /*ui7*/ _3) \
+  ((__m128i)__builtin_lsx_vssrani_d_q ((v2i64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, V16QI, USI.  */
+#define __lsx_vssrani_bu_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vssrani_bu_h ((v16u8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, V8HI, USI.  */
+#define __lsx_vssrani_hu_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vssrani_hu_w ((v8u16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, V4SI, USI.  */
+#define __lsx_vssrani_wu_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui6*/ _3) \
+  ((__m128i)__builtin_lsx_vssrani_wu_d ((v4u32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui7.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, V2DI, USI.  */
+#define __lsx_vssrani_du_q(/*__m128i*/ _1, /*__m128i*/ _2, /*ui7*/ _3) \
+  ((__m128i)__builtin_lsx_vssrani_du_q ((v2u64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, USI.  */
+#define __lsx_vssrarni_b_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vssrarni_b_h ((v16i8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  V8HI, V8HI, V8HI, USI.  */
+#define __lsx_vssrarni_h_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vssrarni_h_w ((v8i16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, USI.  */
+#define __lsx_vssrarni_w_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui6*/ _3) \
+  ((__m128i)__builtin_lsx_vssrarni_w_d ((v4i32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui7.  */
+/* Data types in instruction templates:  V2DI, V2DI, V2DI, USI.  */
+#define __lsx_vssrarni_d_q(/*__m128i*/ _1, /*__m128i*/ _2, /*ui7*/ _3) \
+  ((__m128i)__builtin_lsx_vssrarni_d_q ((v2i64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui4.  */
+/* Data types in instruction templates:  UV16QI, UV16QI, V16QI, USI.  */
+#define __lsx_vssrarni_bu_h(/*__m128i*/ _1, /*__m128i*/ _2, /*ui4*/ _3) \
+  ((__m128i)__builtin_lsx_vssrarni_bu_h ((v16u8)(_1), (v16i8)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui5.  */
+/* Data types in instruction templates:  UV8HI, UV8HI, V8HI, USI.  */
+#define __lsx_vssrarni_hu_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui5*/ _3) \
+  ((__m128i)__builtin_lsx_vssrarni_hu_w ((v8u16)(_1), (v8i16)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui6.  */
+/* Data types in instruction templates:  UV4SI, UV4SI, V4SI, USI.  */
+#define __lsx_vssrarni_wu_d(/*__m128i*/ _1, /*__m128i*/ _2, /*ui6*/ _3) \
+  ((__m128i)__builtin_lsx_vssrarni_wu_d ((v4u32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui7.  */
+/* Data types in instruction templates:  UV2DI, UV2DI, V2DI, USI.  */
+#define __lsx_vssrarni_du_q(/*__m128i*/ _1, /*__m128i*/ _2, /*ui7*/ _3) \
+  ((__m128i)__builtin_lsx_vssrarni_du_q ((v2u64)(_1), (v2i64)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, ui8.  */
+/* Data types in instruction templates:  V4SI, V4SI, V4SI, USI.  */
+#define __lsx_vpermi_w(/*__m128i*/ _1, /*__m128i*/ _2, /*ui8*/ _3) \
+  ((__m128i)__builtin_lsx_vpermi_w ((v4i32)(_1), (v4i32)(_2), (_3)))
+
+/* Assembly instruction format:	vd, rj, si12.  */
+/* Data types in instruction templates:  V16QI, CVPOINTER, SI.  */
+#define __lsx_vld(/*void **/ _1, /*si12*/ _2) \
+  ((__m128i)__builtin_lsx_vld ((void *)(_1), (_2)))
+
+/* Assembly instruction format:	vd, rj, si12.  */
+/* Data types in instruction templates:  VOID, V16QI, CVPOINTER, SI.  */
+#define __lsx_vst(/*__m128i*/ _1, /*void **/ _2, /*si12*/ _3) \
+  ((void)__builtin_lsx_vst ((v16i8)(_1), (void *)(_2), (_3)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrlrn_b_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrlrn_b_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrlrn_h_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrlrn_h_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrlrn_w_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrlrn_w_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V8HI, V8HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrln_b_h (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrln_b_h ((v8i16)_1, (v8i16)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V8HI, V4SI, V4SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrln_h_w (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrln_h_w ((v4i32)_1, (v4i32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V2DI, V2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vssrln_w_d (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vssrln_w_d ((v2i64)_1, (v2i64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vorn_v (__m128i _1, __m128i _2)
+{
+  return (__m128i)__builtin_lsx_vorn_v ((v16i8)_1, (v16i8)_2);
+}
+
+/* Assembly instruction format:	vd, i13.  */
+/* Data types in instruction templates:  V2DI, HI.  */
+#define __lsx_vldi(/*i13*/ _1) \
+  ((__m128i)__builtin_lsx_vldi ((_1)))
+
+/* Assembly instruction format:	vd, vj, vk, va.  */
+/* Data types in instruction templates:  V16QI, V16QI, V16QI, V16QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vshuf_b (__m128i _1, __m128i _2, __m128i _3)
+{
+  return (__m128i)__builtin_lsx_vshuf_b ((v16i8)_1, (v16i8)_2, (v16i8)_3);
+}
+
+/* Assembly instruction format:	vd, rj, rk.  */
+/* Data types in instruction templates:  V16QI, CVPOINTER, DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vldx (void * _1, long int _2)
+{
+  return (__m128i)__builtin_lsx_vldx ((void *)_1, (long int)_2);
+}
+
+/* Assembly instruction format:	vd, rj, rk.  */
+/* Data types in instruction templates:  VOID, V16QI, CVPOINTER, DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+void __lsx_vstx (__m128i _1, void * _2, long int _3)
+{
+  return (void)__builtin_lsx_vstx ((v16i8)_1, (void *)_2, (long int)_3);
+}
+
+/* Assembly instruction format:	vd, vj.  */
+/* Data types in instruction templates:  UV2DI, UV2DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vextl_qu_du (__m128i _1)
+{
+  return (__m128i)__builtin_lsx_vextl_qu_du ((v2u64)_1);
+}
+
+/* Assembly instruction format:	cd, vj.  */
+/* Data types in instruction templates:  SI, UV16QI.  */
+#define __lsx_bnz_b(/*__m128i*/ _1) \
+  ((int)__builtin_lsx_bnz_b ((v16u8)(_1)))
+
+/* Assembly instruction format:	cd, vj.  */
+/* Data types in instruction templates:  SI, UV2DI.  */
+#define __lsx_bnz_d(/*__m128i*/ _1) \
+  ((int)__builtin_lsx_bnz_d ((v2u64)(_1)))
+
+/* Assembly instruction format:	cd, vj.  */
+/* Data types in instruction templates:  SI, UV8HI.  */
+#define __lsx_bnz_h(/*__m128i*/ _1) \
+  ((int)__builtin_lsx_bnz_h ((v8u16)(_1)))
+
+/* Assembly instruction format:	cd, vj.  */
+/* Data types in instruction templates:  SI, UV16QI.  */
+#define __lsx_bnz_v(/*__m128i*/ _1) \
+  ((int)__builtin_lsx_bnz_v ((v16u8)(_1)))
+
+/* Assembly instruction format:	cd, vj.  */
+/* Data types in instruction templates:  SI, UV4SI.  */
+#define __lsx_bnz_w(/*__m128i*/ _1) \
+  ((int)__builtin_lsx_bnz_w ((v4u32)(_1)))
+
+/* Assembly instruction format:	cd, vj.  */
+/* Data types in instruction templates:  SI, UV16QI.  */
+#define __lsx_bz_b(/*__m128i*/ _1) \
+  ((int)__builtin_lsx_bz_b ((v16u8)(_1)))
+
+/* Assembly instruction format:	cd, vj.  */
+/* Data types in instruction templates:  SI, UV2DI.  */
+#define __lsx_bz_d(/*__m128i*/ _1) \
+  ((int)__builtin_lsx_bz_d ((v2u64)(_1)))
+
+/* Assembly instruction format:	cd, vj.  */
+/* Data types in instruction templates:  SI, UV8HI.  */
+#define __lsx_bz_h(/*__m128i*/ _1) \
+  ((int)__builtin_lsx_bz_h ((v8u16)(_1)))
+
+/* Assembly instruction format:	cd, vj.  */
+/* Data types in instruction templates:  SI, UV16QI.  */
+#define __lsx_bz_v(/*__m128i*/ _1) \
+  ((int)__builtin_lsx_bz_v ((v16u8)(_1)))
+
+/* Assembly instruction format:	cd, vj.  */
+/* Data types in instruction templates:  SI, UV4SI.  */
+#define __lsx_bz_w(/*__m128i*/ _1) \
+  ((int)__builtin_lsx_bz_w ((v4u32)(_1)))
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_caf_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_caf_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_caf_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_caf_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_ceq_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_ceq_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_ceq_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_ceq_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cle_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cle_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cle_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cle_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_clt_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_clt_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_clt_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_clt_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cne_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cne_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cne_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cne_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cor_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cor_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cor_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cor_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cueq_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cueq_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cueq_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cueq_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cule_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cule_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cule_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cule_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cult_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cult_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cult_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cult_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cun_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cun_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cune_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cune_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cune_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cune_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_cun_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_cun_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_saf_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_saf_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_saf_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_saf_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_seq_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_seq_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_seq_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_seq_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sle_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sle_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sle_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sle_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_slt_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_slt_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_slt_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_slt_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sne_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sne_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sne_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sne_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sor_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sor_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sor_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sor_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sueq_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sueq_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sueq_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sueq_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sule_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sule_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sule_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sule_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sult_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sult_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sult_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sult_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sun_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sun_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V2DI, V2DF, V2DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sune_d (__m128d _1, __m128d _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sune_d ((v2f64)_1, (v2f64)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sune_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sune_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, vj, vk.  */
+/* Data types in instruction templates:  V4SI, V4SF, V4SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m128i __lsx_vfcmp_sun_s (__m128 _1, __m128 _2)
+{
+  return (__m128i)__builtin_lsx_vfcmp_sun_s ((v4f32)_1, (v4f32)_2);
+}
+
+/* Assembly instruction format:	vd, si10.  */
+/* Data types in instruction templates:  V16QI, HI.  */
+#define __lsx_vrepli_b(/*si10*/ _1) \
+  ((__m128i)__builtin_lsx_vrepli_b ((_1)))
+
+/* Assembly instruction format:	vd, si10.  */
+/* Data types in instruction templates:  V2DI, HI.  */
+#define __lsx_vrepli_d(/*si10*/ _1) \
+  ((__m128i)__builtin_lsx_vrepli_d ((_1)))
+
+/* Assembly instruction format:	vd, si10.  */
+/* Data types in instruction templates:  V8HI, HI.  */
+#define __lsx_vrepli_h(/*si10*/ _1) \
+  ((__m128i)__builtin_lsx_vrepli_h ((_1)))
+
+/* Assembly instruction format:	vd, si10.  */
+/* Data types in instruction templates:  V4SI, HI.  */
+#define __lsx_vrepli_w(/*si10*/ _1) \
+  ((__m128i)__builtin_lsx_vrepli_w ((_1)))
+
+#endif /* defined(__loongarch_sx) */
+#endif /* _GCC_LOONGSON_SXINTRIN_H */
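
For reference, here is a minimal usage sketch of a few of the intrinsics
declared above.  It assumes a LoongArch toolchain built with this series and
compilation with -mlsx; the buffers and per-lane rotate counts are purely
illustrative and are not part of the patch.

  #include <lsxintrin.h>

  /* Rotate each 32-bit lane of SRC right by the per-lane counts in CNT
     and store the result to DST when it is not all zero.  */
  void
  rotate_words (int *dst, const int *src, const int *cnt)
  {
    __m128i v = __lsx_vld (src, 0);      /* load 16 bytes of data       */
    __m128i c = __lsx_vld (cnt, 0);      /* per-lane rotate counts      */
    __m128i r = __lsx_vrotr_w (v, c);    /* vrotr.w: rotate right       */

    if (__lsx_bnz_v (r))                 /* any nonzero bit in result?  */
      __lsx_vst (r, dst, 0);
  }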
-- 
2.36.0


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 4/8] LoongArch: Added Loongson ASX vector directive compilation framework.
  2023-07-18 11:06 [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target Chenghui Pan
                   ` (2 preceding siblings ...)
  2023-07-18 11:06 ` [PATCH v2 3/8] LoongArch: Added Loongson SX directive builtin function support Chenghui Pan
@ 2023-07-18 11:06 ` Chenghui Pan
  2023-07-18 11:06 ` [PATCH v2 5/8] LoongArch: Added Loongson ASX base instruction support Chenghui Pan
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Chenghui Pan @ 2023-07-18 11:06 UTC (permalink / raw)
  To: gcc-patches; +Cc: xry111, i, chenglulu, xuchenghua

From: Lulu Cheng <chenglulu@loongson.cn>

gcc/ChangeLog:

	* config/loongarch/genopts/loongarch-strings: Added compilation framework.
	* config/loongarch/genopts/loongarch.opt.in: Ditto.
	* config/loongarch/loongarch-c.cc (loongarch_cpu_cpp_builtins): Ditto.
	* config/loongarch/loongarch-def.c: Ditto.
	* config/loongarch/loongarch-def.h (N_ISA_EXT_TYPES): Ditto.
	(ISA_EXT_SIMD_LASX): Ditto.
	(N_SWITCH_TYPES): Ditto.
	(SW_LASX): Ditto.
	* config/loongarch/loongarch-driver.cc (driver_get_normalized_m_opts): Ditto.
	* config/loongarch/loongarch-driver.h (driver_get_normalized_m_opts): Ditto.
	* config/loongarch/loongarch-opts.cc (isa_str): Ditto.
	* config/loongarch/loongarch-opts.h (ISA_HAS_LSX): Ditto.
	(ISA_HAS_LASX): Ditto.
	* config/loongarch/loongarch-str.h (OPTSTR_LASX): Ditto.
	* config/loongarch/loongarch.opt: Ditto.
---
 gcc/config/loongarch/genopts/loongarch-strings |  1 +
 gcc/config/loongarch/genopts/loongarch.opt.in  |  4 ++++
 gcc/config/loongarch/loongarch-c.cc            | 11 +++++++++++
 gcc/config/loongarch/loongarch-def.c           |  4 +++-
 gcc/config/loongarch/loongarch-def.h           |  6 ++++--
 gcc/config/loongarch/loongarch-driver.cc       |  2 +-
 gcc/config/loongarch/loongarch-driver.h        |  1 +
 gcc/config/loongarch/loongarch-opts.cc         |  9 ++++++++-
 gcc/config/loongarch/loongarch-opts.h          |  4 +++-
 gcc/config/loongarch/loongarch-str.h           |  1 +
 gcc/config/loongarch/loongarch.opt             |  4 ++++
 11 files changed, 41 insertions(+), 6 deletions(-)

diff --git a/gcc/config/loongarch/genopts/loongarch-strings b/gcc/config/loongarch/genopts/loongarch-strings
index 24a5025061f..35d08f5967d 100644
--- a/gcc/config/loongarch/genopts/loongarch-strings
+++ b/gcc/config/loongarch/genopts/loongarch-strings
@@ -42,6 +42,7 @@ OPTSTR_DOUBLE_FLOAT   double-float
 
 # SIMD extensions
 OPTSTR_LSX	lsx
+OPTSTR_LASX	lasx
 
 # -mabi=
 OPTSTR_ABI_BASE	      abi
diff --git a/gcc/config/loongarch/genopts/loongarch.opt.in b/gcc/config/loongarch/genopts/loongarch.opt.in
index 338d77a7e40..afde23c9661 100644
--- a/gcc/config/loongarch/genopts/loongarch.opt.in
+++ b/gcc/config/loongarch/genopts/loongarch.opt.in
@@ -80,6 +80,10 @@ m@@OPTSTR_LSX@@
 Target RejectNegative Var(la_opt_switches) Mask(LSX) Negative(m@@OPTSTR_LSX@@)
 Enable LoongArch SIMD Extension (LSX).
 
+m@@OPTSTR_LASX@@
+Target RejectNegative Var(la_opt_switches) Mask(LASX) Negative(m@@OPTSTR_LASX@@)
+Enable LoongArch Advanced SIMD Extension (LASX).
+
 ;; Base target models (implies ISA & tune parameters)
 Enum
 Name(cpu_type) Type(int)
diff --git a/gcc/config/loongarch/loongarch-c.cc b/gcc/config/loongarch/loongarch-c.cc
index b065921adc3..2747fb9e472 100644
--- a/gcc/config/loongarch/loongarch-c.cc
+++ b/gcc/config/loongarch/loongarch-c.cc
@@ -104,8 +104,19 @@ loongarch_cpu_cpp_builtins (cpp_reader *pfile)
       builtin_define ("__loongarch_simd");
       builtin_define ("__loongarch_sx");
       builtin_define ("__loongarch_sx_width=128");
+
+      if (!ISA_HAS_LASX)
+	builtin_define ("__loongarch_simd_width=128");
     }
 
+  if (ISA_HAS_LASX)
+    {
+      builtin_define ("__loongarch_asx");
+      builtin_define ("__loongarch_asx_width=256");
+      builtin_define ("__loongarch_simd_width=256");
+    }
+
+
   /* Native Data Sizes.  */
   builtin_define_with_int_value ("_LOONGARCH_SZINT", INT_TYPE_SIZE);
   builtin_define_with_int_value ("_LOONGARCH_SZLONG", LONG_TYPE_SIZE);
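
With the definitions above, a translation unit can select a SIMD path at
preprocessing time.  A minimal sketch (only the macro names come from this
patch; the fallback value is illustrative):

  #if defined (__loongarch_asx)
  #  define SIMD_WIDTH 256	/* matches __loongarch_asx_width  */
  #elif defined (__loongarch_sx)
  #  define SIMD_WIDTH 128	/* matches __loongarch_sx_width   */
  #else
  #  define SIMD_WIDTH 0	/* scalar fallback                */
  #endif

Code that only needs the width can of course read __loongarch_simd_width
directly instead of the #elif chain.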
diff --git a/gcc/config/loongarch/loongarch-def.c b/gcc/config/loongarch/loongarch-def.c
index 28e24c62249..bff92c86532 100644
--- a/gcc/config/loongarch/loongarch-def.c
+++ b/gcc/config/loongarch/loongarch-def.c
@@ -54,7 +54,7 @@ loongarch_cpu_default_isa[N_ARCH_TYPES] = {
   [CPU_LA464] = {
       .base = ISA_BASE_LA64V100,
       .fpu = ISA_EXT_FPU64,
-      .simd = ISA_EXT_SIMD_LSX,
+      .simd = ISA_EXT_SIMD_LASX,
   },
 };
 
@@ -150,6 +150,7 @@ loongarch_isa_ext_strings[N_ISA_EXT_TYPES] = {
   [ISA_EXT_FPU32] = STR_ISA_EXT_FPU32,
   [ISA_EXT_NOFPU] = STR_ISA_EXT_NOFPU,
   [ISA_EXT_SIMD_LSX] = OPTSTR_LSX,
+  [ISA_EXT_SIMD_LASX] = OPTSTR_LASX,
 };
 
 const char*
@@ -180,6 +181,7 @@ loongarch_switch_strings[] = {
   [SW_SINGLE_FLOAT]	  = OPTSTR_SINGLE_FLOAT,
   [SW_DOUBLE_FLOAT]	  = OPTSTR_DOUBLE_FLOAT,
   [SW_LSX]		  = OPTSTR_LSX,
+  [SW_LASX]		  = OPTSTR_LASX,
 };
 
 
diff --git a/gcc/config/loongarch/loongarch-def.h b/gcc/config/loongarch/loongarch-def.h
index f34cffcfb9b..0bbcdb03d22 100644
--- a/gcc/config/loongarch/loongarch-def.h
+++ b/gcc/config/loongarch/loongarch-def.h
@@ -64,7 +64,8 @@ extern const char* loongarch_isa_ext_strings[];
 #define ISA_EXT_FPU64	      2
 #define N_ISA_EXT_FPU_TYPES   3
 #define ISA_EXT_SIMD_LSX      3
-#define N_ISA_EXT_TYPES	      4
+#define ISA_EXT_SIMD_LASX     4
+#define N_ISA_EXT_TYPES	      5
 
 /* enum abi_base */
 extern const char* loongarch_abi_base_strings[];
@@ -99,7 +100,8 @@ extern const char* loongarch_switch_strings[];
 #define SW_SINGLE_FLOAT	      1
 #define SW_DOUBLE_FLOAT	      2
 #define SW_LSX		      3
-#define N_SWITCH_TYPES	      4
+#define SW_LASX		      4
+#define N_SWITCH_TYPES	      5
 
 /* The common default value for variables whose assignments
    are triggered by command-line options.  */
diff --git a/gcc/config/loongarch/loongarch-driver.cc b/gcc/config/loongarch/loongarch-driver.cc
index aa5011bd86a..3b9605de35f 100644
--- a/gcc/config/loongarch/loongarch-driver.cc
+++ b/gcc/config/loongarch/loongarch-driver.cc
@@ -181,7 +181,7 @@ driver_get_normalized_m_opts (int argc, const char **argv)
 
   if (la_target.isa.simd)
     {
-      APPEND_LTR (" %<m" OPTSTR_LSX " -m");
+      APPEND_LTR (" %<m" OPTSTR_LSX " %<m" OPTSTR_LASX " -m");
       APPEND_VAL (loongarch_isa_ext_strings[la_target.isa.simd]);
     }
 
diff --git a/gcc/config/loongarch/loongarch-driver.h b/gcc/config/loongarch/loongarch-driver.h
index db663818b7c..0c6b4157261 100644
--- a/gcc/config/loongarch/loongarch-driver.h
+++ b/gcc/config/loongarch/loongarch-driver.h
@@ -52,6 +52,7 @@ driver_get_normalized_m_opts (int argc, const char **argv);
   LA_SET_FLAG_SPEC (SINGLE_FLOAT)			      \
   LA_SET_FLAG_SPEC (DOUBLE_FLOAT)			      \
   LA_SET_FLAG_SPEC (LSX)				      \
+  LA_SET_FLAG_SPEC (LASX)				      \
   " %:get_normalized_m_opts()"
 
 #define DRIVER_SELF_SPECS \
diff --git a/gcc/config/loongarch/loongarch-opts.cc b/gcc/config/loongarch/loongarch-opts.cc
index 9753cf1290b..5986a2dd456 100644
--- a/gcc/config/loongarch/loongarch-opts.cc
+++ b/gcc/config/loongarch/loongarch-opts.cc
@@ -84,6 +84,7 @@ const int loongarch_switch_mask[N_SWITCH_TYPES] = {
   /* SW_SINGLE_FLOAT */  M(FORCE_F32),
   /* SW_DOUBLE_FLOAT */  M(FORCE_F64),
   /* SW_LSX */		 M(LSX),
+  /* SW_LASX */		 M(LASX),
 };
 #undef M
 
@@ -254,8 +255,9 @@ config_target_isa:
      t.isa.fpu : DEFAULT_ISA_EXT_FPU);
 
   /* LoongArch SIMD extensions.  */
+  /* Note: LASX implies LSX, so we put "on (LASX)" first.  */
   int simd_switch;
-  if (on (LSX))
+  if (on (LASX) || on (LSX))
     {
       constrained.simd = 1;
       switch (on_switch)
@@ -264,6 +266,10 @@ config_target_isa:
 	    t.isa.simd = ISA_EXT_SIMD_LSX;
 	    break;
 
+	  case SW_LASX:
+	    t.isa.simd = ISA_EXT_SIMD_LASX;
+	    break;
+
 	  default:
 	    gcc_unreachable ();
 	}
@@ -603,6 +609,7 @@ isa_str (const struct loongarch_isa *isa, char separator)
   switch (isa->simd)
     {
       case ISA_EXT_SIMD_LSX:
+      case ISA_EXT_SIMD_LASX:
 	APPEND1 (separator);
 	APPEND_STRING (loongarch_isa_ext_strings[isa->simd]);
 	break;
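
The net effect of the switch handling above is that -mlasx selects
ISA_EXT_SIMD_LASX and -mlsx selects ISA_EXT_SIMD_LSX, with LASX taking
precedence because it implies LSX.  A stand-alone sketch of that precedence
(names below are illustrative, not the actual GCC code):

  enum simd_ext { SIMD_NONE, SIMD_LSX, SIMD_LASX };

  /* Illustrative only: LASX subsumes LSX, so it wins when both
     switches are enabled.  */
  static enum simd_ext
  resolve_simd (int lsx_on, int lasx_on)
  {
    if (lasx_on)
      return SIMD_LASX;
    if (lsx_on)
      return SIMD_LSX;
    return SIMD_NONE;
  }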
diff --git a/gcc/config/loongarch/loongarch-opts.h b/gcc/config/loongarch/loongarch-opts.h
index d067c05dfc9..59a383ec5ca 100644
--- a/gcc/config/loongarch/loongarch-opts.h
+++ b/gcc/config/loongarch/loongarch-opts.h
@@ -66,7 +66,9 @@ loongarch_config_target (struct loongarch_target *target,
 				   || la_target.abi.base == ABI_BASE_LP64F \
 				   || la_target.abi.base == ABI_BASE_LP64S)
 
-#define ISA_HAS_LSX		  (la_target.isa.simd == ISA_EXT_SIMD_LSX)
+#define ISA_HAS_LSX		  (la_target.isa.simd == ISA_EXT_SIMD_LSX \
+				   || la_target.isa.simd == ISA_EXT_SIMD_LASX)
+#define ISA_HAS_LASX		  (la_target.isa.simd == ISA_EXT_SIMD_LASX)
 #define TARGET_ARCH_NATIVE	  (la_target.cpu_arch == CPU_NATIVE)
 #define LARCH_ACTUAL_ARCH	  (TARGET_ARCH_NATIVE \
 				   ? (la_target.cpu_native < N_ARCH_TYPES \
diff --git a/gcc/config/loongarch/loongarch-str.h b/gcc/config/loongarch/loongarch-str.h
index 6fa1b1571c5..951f35a3c24 100644
--- a/gcc/config/loongarch/loongarch-str.h
+++ b/gcc/config/loongarch/loongarch-str.h
@@ -43,6 +43,7 @@ along with GCC; see the file COPYING3.  If not see
 #define OPTSTR_DOUBLE_FLOAT "double-float"
 
 #define OPTSTR_LSX "lsx"
+#define OPTSTR_LASX "lasx"
 
 #define OPTSTR_ABI_BASE "abi"
 #define STR_ABI_BASE_LP64D "lp64d"
diff --git a/gcc/config/loongarch/loongarch.opt b/gcc/config/loongarch/loongarch.opt
index 5c7e6d37220..611629b4203 100644
--- a/gcc/config/loongarch/loongarch.opt
+++ b/gcc/config/loongarch/loongarch.opt
@@ -87,6 +87,10 @@ mlsx
 Target RejectNegative Var(la_opt_switches) Mask(LSX) Negative(mlsx)
 Enable LoongArch SIMD Extension (LSX).
 
+mlasx
+Target RejectNegative Var(la_opt_switches) Mask(LASX) Negative(mlasx)
+Enable LoongArch Advanced SIMD Extension (LASX).
+
 ;; Base target models (implies ISA & tune parameters)
 Enum
 Name(cpu_type) Type(int)
-- 
2.36.0


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 5/8] LoongArch: Added Loongson ASX base instruction support.
  2023-07-18 11:06 [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target Chenghui Pan
                   ` (3 preceding siblings ...)
  2023-07-18 11:06 ` [PATCH v2 4/8] LoongArch: Added Loongson ASX vector directive compilation framework Chenghui Pan
@ 2023-07-18 11:06 ` Chenghui Pan
  2023-07-18 11:06 ` [PATCH v2 6/8] LoongArch: Added Loongson ASX directive builtin function support Chenghui Pan
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Chenghui Pan @ 2023-07-18 11:06 UTC (permalink / raw)
  To: gcc-patches; +Cc: xry111, i, chenglulu, xuchenghua

From: Lulu Cheng <chenglulu@loongson.cn>

gcc/ChangeLog:

	* config/loongarch/loongarch-modes.def
	(VECTOR_MODES): Added Loongson ASX instruction support.
	* config/loongarch/loongarch-protos.h (loongarch_split_256bit_move): Ditto.
	(loongarch_split_256bit_move_p): Ditto.
	(loongarch_expand_vector_group_init): Ditto.
	(loongarch_expand_vec_perm_1): Ditto.
	* config/loongarch/loongarch.cc (loongarch_symbol_insns): Ditto.
	(loongarch_valid_offset_p): Ditto.
	(loongarch_valid_index_p): Ditto.
	(loongarch_address_insns): Ditto.
	(loongarch_const_insns): Ditto.
	(loongarch_legitimize_move): Ditto.
	(loongarch_builtin_vectorization_cost): Ditto.
	(loongarch_split_move_p): Ditto.
	(loongarch_split_move): Ditto.
	(loongarch_output_move_index): Ditto.
	(loongarch_output_move_index_float): Ditto.
	(loongarch_split_256bit_move_p): Ditto.
	(loongarch_split_256bit_move): Ditto.
	(loongarch_output_move): Ditto.
	(loongarch_print_operand_reloc): Ditto.
	(loongarch_print_operand): Ditto.
	(loongarch_hard_regno_mode_ok_uncached): Ditto.
	(loongarch_hard_regno_nregs): Ditto.
	(loongarch_class_max_nregs): Ditto.
	(loongarch_can_change_mode_class): Ditto.
	(loongarch_mode_ok_for_mov_fmt_p): Ditto.
	(loongarch_vector_mode_supported_p): Ditto.
	(loongarch_preferred_simd_mode): Ditto.
	(loongarch_autovectorize_vector_modes): Ditto.
	(loongarch_lsx_output_division): Ditto.
	(loongarch_expand_lsx_shuffle): Ditto.
	(loongarch_expand_vec_perm): Ditto.
	(loongarch_expand_vec_perm_interleave): Ditto.
	(loongarch_try_expand_lsx_vshuf_const): Ditto.
	(loongarch_expand_vec_perm_even_odd_1): Ditto.
	(loongarch_expand_vec_perm_even_odd): Ditto.
	(loongarch_expand_vec_perm_1): Ditto.
	(loongarch_expand_vec_perm_const_1): Ditto.
	(loongarch_is_quad_duplicate): Ditto.
	(loongarch_is_double_duplicate): Ditto.
	(loongarch_is_odd_extraction): Ditto.
	(loongarch_is_even_extraction): Ditto.
	(loongarch_is_extraction_permutation): Ditto.
	(loongarch_is_center_extraction): Ditto.
	(loongarch_is_reversing_permutation): Ditto.
	(loongarch_is_di_misalign_extract): Ditto.
	(loongarch_is_si_misalign_extract): Ditto.
	(loongarch_is_lasx_lowpart_interleave): Ditto.
	(loongarch_is_lasx_lowpart_interleave_2): Ditto.
	(COMPARE_SELECTOR): Ditto.
	(loongarch_is_lasx_lowpart_extract): Ditto.
	(loongarch_is_lasx_highpart_interleave): Ditto.
	(loongarch_is_lasx_highpart_interleave_2): Ditto.
	(loongarch_is_elem_duplicate): Ditto.
	(loongarch_is_op_reverse_perm): Ditto.
	(loongarch_is_single_op_perm): Ditto.
	(loongarch_is_divisible_perm): Ditto.
	(loongarch_is_triple_stride_extract): Ditto.
	(loongarch_expand_vec_perm_const_2): Ditto.
	(loongarch_sched_reassociation_width): Ditto.
	(loongarch_expand_vector_extract): Ditto.
	(emit_reduc_half): Ditto.
	(loongarch_expand_vec_unpack): Ditto.
	(loongarch_expand_vector_group_init): Ditto.
	(loongarch_expand_vector_init): Ditto.
	(loongarch_expand_lsx_cmp): Ditto.
	(loongarch_builtin_support_vector_misalignment): Ditto.
	* config/loongarch/loongarch.h (UNITS_PER_LASX_REG): Ditto.
	(BITS_PER_LASX_REG): Ditto.
	(STRUCTURE_SIZE_BOUNDARY): Ditto.
	(LASX_REG_FIRST): Ditto.
	(LASX_REG_LAST): Ditto.
	(LASX_REG_NUM): Ditto.
	(LASX_REG_P): Ditto.
	(LASX_REG_RTX_P): Ditto.
	(LASX_SUPPORTED_MODE_P): Ditto.
	* config/loongarch/loongarch.md: Ditto.
	* config/loongarch/lasx.md: New file.
---
 gcc/config/loongarch/lasx.md             | 5120 ++++++++++++++++++++++
 gcc/config/loongarch/loongarch-modes.def |    1 +
 gcc/config/loongarch/loongarch-protos.h  |    4 +
 gcc/config/loongarch/loongarch.cc        | 2512 ++++++++++-
 gcc/config/loongarch/loongarch.h         |   60 +-
 gcc/config/loongarch/loongarch.md        |   20 +-
 6 files changed, 7587 insertions(+), 130 deletions(-)
 create mode 100644 gcc/config/loongarch/lasx.md

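(Note, not part of the patch: as a rough illustration of what the new patterns enable, a plain
C loop such as the one below should auto-vectorize to 256-bit LASX code when built with
-O3 -mlasx, assuming the vectorizer selects the 256-bit modes added here; the V8SI add<mode>3
pattern added below emits xvadd.w.  The file name and options are only an example.)

  /* add.c -- hypothetical example; compile with: gcc -O3 -mlasx -S add.c  */
  void
  vadd (int *restrict a, int *restrict b, int *restrict c, int n)
  {
    for (int i = 0; i < n; i++)
      c[i] = a[i] + b[i];
  }
  /* The vectorized inner loop is expected to use xvld/xvadd.w/xvst,
     handling eight ints per iteration.  */
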
diff --git a/gcc/config/loongarch/lasx.md b/gcc/config/loongarch/lasx.md
new file mode 100644
index 00000000000..69c0688a3db
--- /dev/null
+++ b/gcc/config/loongarch/lasx.md
@@ -0,0 +1,5120 @@
+;; Machine Description for LARCH Loongson ASX ASE
+;;
+;; Copyright (C) 2018 Free Software Foundation, Inc.
+;;
+;; This file is part of GCC.
+;;
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+;;
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;; GNU General Public License for more details.
+;;
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; <http://www.gnu.org/licenses/>.
+;;
+
+(define_c_enum "unspec" [
+  UNSPEC_LASX_XVABSD_U
+  UNSPEC_LASX_XVAVG_S
+  UNSPEC_LASX_XVAVG_U
+  UNSPEC_LASX_XVAVGR_S
+  UNSPEC_LASX_XVAVGR_U
+  UNSPEC_LASX_XVBITCLR
+  UNSPEC_LASX_XVBITCLRI
+  UNSPEC_LASX_XVBITREV
+  UNSPEC_LASX_XVBITREVI
+  UNSPEC_LASX_XVBITSET
+  UNSPEC_LASX_XVBITSETI
+  UNSPEC_LASX_XVFCMP_CAF
+  UNSPEC_LASX_XVFCLASS
+  UNSPEC_LASX_XVFCMP_CUNE
+  UNSPEC_LASX_XVFCVT
+  UNSPEC_LASX_XVFCVTH
+  UNSPEC_LASX_XVFCVTL
+  UNSPEC_LASX_XVFLOGB
+  UNSPEC_LASX_XVFRECIP
+  UNSPEC_LASX_XVFRINT
+  UNSPEC_LASX_XVFRSQRT
+  UNSPEC_LASX_XVFCMP_SAF
+  UNSPEC_LASX_XVFCMP_SEQ
+  UNSPEC_LASX_XVFCMP_SLE
+  UNSPEC_LASX_XVFCMP_SLT
+  UNSPEC_LASX_XVFCMP_SNE
+  UNSPEC_LASX_XVFCMP_SOR
+  UNSPEC_LASX_XVFCMP_SUEQ
+  UNSPEC_LASX_XVFCMP_SULE
+  UNSPEC_LASX_XVFCMP_SULT
+  UNSPEC_LASX_XVFCMP_SUN
+  UNSPEC_LASX_XVFCMP_SUNE
+  UNSPEC_LASX_XVFTINT_S
+  UNSPEC_LASX_XVFTINT_U
+  UNSPEC_LASX_XVCLO
+  UNSPEC_LASX_XVSAT_S
+  UNSPEC_LASX_XVSAT_U
+  UNSPEC_LASX_XVREPLVE0
+  UNSPEC_LASX_XVREPL128VEI
+  UNSPEC_LASX_XVSRAR
+  UNSPEC_LASX_XVSRARI
+  UNSPEC_LASX_XVSRLR
+  UNSPEC_LASX_XVSRLRI
+  UNSPEC_LASX_XVSHUF
+  UNSPEC_LASX_XVSHUF_B
+  UNSPEC_LASX_BRANCH
+  UNSPEC_LASX_BRANCH_V
+
+  UNSPEC_LASX_XVMUH_S
+  UNSPEC_LASX_XVMUH_U
+  UNSPEC_LASX_MXVEXTW_U
+  UNSPEC_LASX_XVSLLWIL_S
+  UNSPEC_LASX_XVSLLWIL_U
+  UNSPEC_LASX_XVSRAN
+  UNSPEC_LASX_XVSSRAN_S
+  UNSPEC_LASX_XVSSRAN_U
+  UNSPEC_LASX_XVSRARN
+  UNSPEC_LASX_XVSSRARN_S
+  UNSPEC_LASX_XVSSRARN_U
+  UNSPEC_LASX_XVSRLN
+  UNSPEC_LASX_XVSSRLN_U
+  UNSPEC_LASX_XVSRLRN
+  UNSPEC_LASX_XVSSRLRN_U
+  UNSPEC_LASX_XVFRSTPI
+  UNSPEC_LASX_XVFRSTP
+  UNSPEC_LASX_XVSHUF4I
+  UNSPEC_LASX_XVBSRL_V
+  UNSPEC_LASX_XVBSLL_V
+  UNSPEC_LASX_XVEXTRINS
+  UNSPEC_LASX_XVMSKLTZ
+  UNSPEC_LASX_XVSIGNCOV
+  UNSPEC_LASX_XVFTINTRNE_W_S
+  UNSPEC_LASX_XVFTINTRNE_L_D
+  UNSPEC_LASX_XVFTINTRP_W_S
+  UNSPEC_LASX_XVFTINTRP_L_D
+  UNSPEC_LASX_XVFTINTRM_W_S
+  UNSPEC_LASX_XVFTINTRM_L_D
+  UNSPEC_LASX_XVFTINT_W_D
+  UNSPEC_LASX_XVFFINT_S_L
+  UNSPEC_LASX_XVFTINTRZ_W_D
+  UNSPEC_LASX_XVFTINTRP_W_D
+  UNSPEC_LASX_XVFTINTRM_W_D
+  UNSPEC_LASX_XVFTINTRNE_W_D
+  UNSPEC_LASX_XVFTINTH_L_S
+  UNSPEC_LASX_XVFTINTL_L_S
+  UNSPEC_LASX_XVFFINTH_D_W
+  UNSPEC_LASX_XVFFINTL_D_W
+  UNSPEC_LASX_XVFTINTRZH_L_S
+  UNSPEC_LASX_XVFTINTRZL_L_S
+  UNSPEC_LASX_XVFTINTRPH_L_S
+  UNSPEC_LASX_XVFTINTRPL_L_S
+  UNSPEC_LASX_XVFTINTRMH_L_S
+  UNSPEC_LASX_XVFTINTRML_L_S
+  UNSPEC_LASX_XVFTINTRNEL_L_S
+  UNSPEC_LASX_XVFTINTRNEH_L_S
+  UNSPEC_LASX_XVFRINTRNE_S
+  UNSPEC_LASX_XVFRINTRNE_D
+  UNSPEC_LASX_XVFRINTRZ_S
+  UNSPEC_LASX_XVFRINTRZ_D
+  UNSPEC_LASX_XVFRINTRP_S
+  UNSPEC_LASX_XVFRINTRP_D
+  UNSPEC_LASX_XVFRINTRM_S
+  UNSPEC_LASX_XVFRINTRM_D
+  UNSPEC_LASX_XVREPLVE0_Q
+  UNSPEC_LASX_XVPERM_W
+  UNSPEC_LASX_XVPERMI_Q
+  UNSPEC_LASX_XVPERMI_D
+
+  UNSPEC_LASX_XVADDWEV
+  UNSPEC_LASX_XVADDWEV2
+  UNSPEC_LASX_XVADDWEV3
+  UNSPEC_LASX_XVSUBWEV
+  UNSPEC_LASX_XVSUBWEV2
+  UNSPEC_LASX_XVMULWEV
+  UNSPEC_LASX_XVMULWEV2
+  UNSPEC_LASX_XVMULWEV3
+  UNSPEC_LASX_XVADDWOD
+  UNSPEC_LASX_XVADDWOD2
+  UNSPEC_LASX_XVADDWOD3
+  UNSPEC_LASX_XVSUBWOD
+  UNSPEC_LASX_XVSUBWOD2
+  UNSPEC_LASX_XVMULWOD
+  UNSPEC_LASX_XVMULWOD2
+  UNSPEC_LASX_XVMULWOD3
+  UNSPEC_LASX_XVMADDWEV
+  UNSPEC_LASX_XVMADDWEV2
+  UNSPEC_LASX_XVMADDWEV3
+  UNSPEC_LASX_XVMADDWOD
+  UNSPEC_LASX_XVMADDWOD2
+  UNSPEC_LASX_XVMADDWOD3
+  UNSPEC_LASX_XVHADDW_Q_D
+  UNSPEC_LASX_XVHSUBW_Q_D
+  UNSPEC_LASX_XVHADDW_QU_DU
+  UNSPEC_LASX_XVHSUBW_QU_DU
+  UNSPEC_LASX_XVROTR
+  UNSPEC_LASX_XVADD_Q
+  UNSPEC_LASX_XVSUB_Q
+  UNSPEC_LASX_XVREPLVE
+  UNSPEC_LASX_XVSHUF4
+  UNSPEC_LASX_XVMSKGEZ
+  UNSPEC_LASX_XVMSKNZ
+  UNSPEC_LASX_XVEXTH_Q_D
+  UNSPEC_LASX_XVEXTH_QU_DU
+  UNSPEC_LASX_XVEXTL_Q_D
+  UNSPEC_LASX_XVSRLNI
+  UNSPEC_LASX_XVSRLRNI
+  UNSPEC_LASX_XVSSRLNI
+  UNSPEC_LASX_XVSSRLNI2
+  UNSPEC_LASX_XVSSRLRNI
+  UNSPEC_LASX_XVSSRLRNI2
+  UNSPEC_LASX_XVSRANI
+  UNSPEC_LASX_XVSRARNI
+  UNSPEC_LASX_XVSSRANI
+  UNSPEC_LASX_XVSSRANI2
+  UNSPEC_LASX_XVSSRARNI
+  UNSPEC_LASX_XVSSRARNI2
+  UNSPEC_LASX_XVPERMI
+  UNSPEC_LASX_XVINSVE0
+  UNSPEC_LASX_XVPICKVE
+  UNSPEC_LASX_XVSSRLN
+  UNSPEC_LASX_XVSSRLRN
+  UNSPEC_LASX_XVEXTL_QU_DU
+  UNSPEC_LASX_XVLDI
+  UNSPEC_LASX_XVLDX
+  UNSPEC_LASX_XVSTX
+])
+
+;; All vector modes with 256 bits.
+(define_mode_iterator LASX [V4DF V8SF V4DI V8SI V16HI V32QI])
+
+;; Same as LASX.  Used by vcond to iterate two modes.
+(define_mode_iterator LASX_2 [V4DF V8SF V4DI V8SI V16HI V32QI])
+
+;; Only used for splitting insert_d and copy_{u,s}.d.
+(define_mode_iterator LASX_D [V4DI V4DF])
+
+;; As LASX_D, but including the word modes V8SI and V8SF.
+(define_mode_iterator LASX_WD [V4DI V4DF V8SI V8SF])
+
+;; Only used for copy256_{u,s}.w.
+(define_mode_iterator LASX_W    [V8SI V8SF])
+
+;; Only integer modes in LASX.
+(define_mode_iterator ILASX [V4DI V8SI V16HI V32QI])
+
+;; As ILASX but excludes V32QI.
+(define_mode_iterator ILASX_DWH [V4DI V8SI V16HI])
+
+;; As LASX but excludes V32QI.
+(define_mode_iterator LASX_DWH [V4DF V8SF V4DI V8SI V16HI])
+
+;; As ILASX but excludes V4DI.
+(define_mode_iterator ILASX_WHB [V8SI V16HI V32QI])
+
+;; Only integer modes equal to or larger than a word.
+(define_mode_iterator ILASX_DW  [V4DI V8SI])
+
+;; Only integer modes smaller than a word.
+(define_mode_iterator ILASX_HB  [V16HI V32QI])
+
+;; Only floating-point modes in LASX.
+(define_mode_iterator FLASX  [V4DF V8SF])
+
+;; Only used for immediate set shuffle elements instruction.
+(define_mode_iterator LASX_WHB_W [V8SI V16HI V32QI V8SF])
+
+;; This attribute gives the same-size integer vector mode in Loongson ASX.
+(define_mode_attr VIMODE256
+  [(V4DF "V4DI")
+   (V8SF "V8SI")
+   (V4DI "V4DI")
+   (V8SI "V8SI")
+   (V16HI "V16HI")
+   (V32QI "V32QI")])
+
+;; This attribute gives the mode with half-width elements and the same
+;; overall vector size.
+(define_mode_attr VHSMODE256
+  [(V16HI "V32QI")
+   (V8SI "V16HI")
+   (V4DI "V8SI")])
+
+;; This attribute gives half modes for vector modes.
+(define_mode_attr VHMODE256
+  [(V32QI "V16QI")
+   (V16HI "V8HI")
+   (V8SI "V4SI")
+   (V4DI "V2DI")])
+
+;; This attribute gives half float modes for vector modes.
+(define_mode_attr VFHMODE256
+   [(V8SF "V4SF")
+   (V4DF "V2DF")])
+
+;; The attribute gives double modes for vector modes in LASX.
+(define_mode_attr VDMODE256
+  [(V8SI "V4DI")
+   (V16HI "V8SI")
+   (V32QI "V16HI")])
+
+;; Extended from VDMODE256; V4DI maps to itself.
+(define_mode_attr VDMODEEXD256
+  [(V4DI "V4DI")
+   (V8SI "V4DI")
+   (V16HI "V8SI")
+   (V32QI "V16HI")])
+
+;; This attribute gives half modes with the same number of elements.
+(define_mode_attr VTRUNCMODE256
+  [(V16HI "V16QI")
+   (V8SI "V8HI")
+   (V4DI "V4SI")])
+
+;; This attribute gives the mode of the result for "copy_s_b, copy_u_b" etc.
+(define_mode_attr VRES256
+  [(V4DF "DF")
+   (V8SF "SF")
+   (V4DI "DI")
+   (V8SI "SI")
+   (V16HI "SI")
+   (V32QI "SI")])
+
+;; Only used with LASX_D iterator.
+(define_mode_attr lasx_d
+  [(V4DI "reg_or_0")
+   (V4DF "register")])
+
+;; This attribute gives the 256-bit integer vector mode with the same size.
+(define_mode_attr mode256_i
+  [(V4DF "v4di")
+   (V8SF "v8si")
+   (V4DI "v4di")
+   (V8SI "v8si")
+   (V16HI "v16hi")
+   (V32QI "v32qi")])
+
+
+;; This attribute gives the 256-bit float vector mode with the same size.
+(define_mode_attr mode256_f
+  [(V4DF "v4df")
+   (V8SF "v8sf")
+   (V4DI "v4df")
+   (V8SI "v8sf")])
+
+;; This attribute gives the suffix for LASX instructions.
+(define_mode_attr lasxfmt
+  [(V4DF "d")
+   (V8SF "w")
+   (V4DI "d")
+   (V8SI "w")
+   (V16HI "h")
+   (V32QI "b")])
+
+(define_mode_attr flasxfmt
+  [(V4DF "d")
+   (V8SF "s")])
+
+(define_mode_attr lasxfmt_u
+  [(V4DF "du")
+   (V8SF "wu")
+   (V4DI "du")
+   (V8SI "wu")
+   (V16HI "hu")
+   (V32QI "bu")])
+
+(define_mode_attr ilasxfmt
+  [(V4DF "l")
+   (V8SF "w")])
+
+(define_mode_attr ilasxfmt_u
+  [(V4DF "lu")
+   (V8SF "wu")])
+
+;; This attribute gives the suffix for integers in VHSMODE256.
+(define_mode_attr hlasxfmt
+  [(V4DI "w")
+   (V8SI "h")
+   (V16HI "b")])
+
+(define_mode_attr hlasxfmt_u
+  [(V4DI "wu")
+   (V8SI "hu")
+   (V16HI "bu")])
+
+;; This attribute gives suffix for integers in VHSMODE256.
+(define_mode_attr hslasxfmt
+  [(V4DI "w")
+   (V8SI "h")
+   (V16HI "b")])
+
+;; This attribute gives define_insn suffix for LASX instructions that need
+;; distinction between integer and floating point.
+(define_mode_attr lasxfmt_f
+  [(V4DF "d_f")
+   (V8SF "w_f")
+   (V4DI "d")
+   (V8SI "w")
+   (V16HI "h")
+   (V32QI "b")])
+
+(define_mode_attr flasxfmt_f
+  [(V4DF "d_f")
+   (V8SF "s_f")
+   (V4DI "d")
+   (V8SI "w")
+   (V16HI "h")
+   (V32QI "b")])
+
+;; This attribute gives define_insn suffix for LASX instructions that need
+;; distinction between integer and floating point.
+(define_mode_attr lasxfmt_f_wd
+  [(V4DF "d_f")
+   (V8SF "w_f")
+   (V4DI "d")
+   (V8SI "w")])
+
+;; This attribute gives the suffix for integers in VDMODE256.
+(define_mode_attr dlasxfmt
+  [(V8SI "d")
+   (V16HI "w")
+   (V32QI "h")])
+
+(define_mode_attr dlasxfmt_u
+  [(V8SI "du")
+   (V16HI "wu")
+   (V32QI "hu")])
+
+;; For VDMODEEXD256.
+(define_mode_attr dlasxqfmt
+  [(V4DI "q")
+   (V8SI "d")
+   (V16HI "w")
+   (V32QI "h")])
+
+;; This is used to form an immediate operand constraint using
+;; "const_<indeximm256>_operand".
+(define_mode_attr indeximm256
+  [(V4DF "0_to_3")
+   (V8SF "0_to_7")
+   (V4DI "0_to_3")
+   (V8SI "0_to_7")
+   (V16HI "uimm4")
+   (V32QI "uimm5")])
+
+;; This is used to form an immediate operand constraint that refers to the
+;; high half, using "const_<indeximm_hi>_operand".
+(define_mode_attr indeximm_hi
+  [(V4DF "2_or_3")
+   (V8SF "4_to_7")
+   (V4DI "2_or_3")
+   (V8SI "4_to_7")
+   (V16HI "8_to_15")
+   (V32QI "16_to_31")])
+
+;; This is used to form an immediate operand constraint that refers to the
+;; low half, using "const_<indeximm_lo>_operand".
+(define_mode_attr indeximm_lo
+  [(V4DF "0_or_1")
+   (V8SF "0_to_3")
+   (V4DI "0_or_1")
+   (V8SI "0_to_3")
+   (V16HI "uimm3")
+   (V32QI "uimm4")])
+
+;; This attribute represents the bitmask needed for vec_merge in LASX,
+;; using "const_<bitmask256>_operand".
+(define_mode_attr bitmask256
+  [(V4DF "exp_4")
+   (V8SF "exp_8")
+   (V4DI "exp_4")
+   (V8SI "exp_8")
+   (V16HI "exp_16")
+   (V32QI "exp_32")])
+
+;; This attribute represents the bitmask needed for vec_merge on the low half,
+;; using "const_<bitmask_lo>_operand".
+(define_mode_attr bitmask_lo
+  [(V4DF "exp_2")
+   (V8SF "exp_4")
+   (V4DI "exp_2")
+   (V8SI "exp_4")
+   (V16HI "exp_8")
+   (V32QI "exp_16")])
+
+
+;; This attribute is used to form an immediate operand constraint using
+;; "const_<bitimm256>_operand".
+(define_mode_attr bitimm256
+  [(V32QI "uimm3")
+   (V16HI  "uimm4")
+   (V8SI  "uimm5")
+   (V4DI  "uimm6")])
+
+
+(define_mode_attr d2lasxfmt
+  [(V8SI "q")
+   (V16HI "d")
+   (V32QI "w")])
+
+(define_mode_attr d2lasxfmt_u
+  [(V8SI "qu")
+   (V16HI "du")
+   (V32QI "wu")])
+
+(define_mode_attr VD2MODE256
+  [(V8SI "V4DI")
+   (V16HI "V4DI")
+   (V32QI "V8SI")])
+
+(define_mode_attr lasxfmt_wd
+  [(V4DI "d")
+   (V8SI "w")
+   (V16HI "w")
+   (V32QI "w")])
+
+(define_int_iterator FRINT256_S [UNSPEC_LASX_XVFRINTRP_S
+			       UNSPEC_LASX_XVFRINTRZ_S
+			       UNSPEC_LASX_XVFRINT
+			       UNSPEC_LASX_XVFRINTRM_S])
+
+(define_int_iterator FRINT256_D [UNSPEC_LASX_XVFRINTRP_D
+			       UNSPEC_LASX_XVFRINTRZ_D
+			       UNSPEC_LASX_XVFRINT
+			       UNSPEC_LASX_XVFRINTRM_D])
+
+(define_int_attr frint256_pattern_s
+  [(UNSPEC_LASX_XVFRINTRP_S  "ceil")
+   (UNSPEC_LASX_XVFRINTRZ_S  "btrunc")
+   (UNSPEC_LASX_XVFRINT	     "rint")
+   (UNSPEC_LASX_XVFRINTRM_S  "floor")])
+
+(define_int_attr frint256_pattern_d
+  [(UNSPEC_LASX_XVFRINTRP_D  "ceil")
+   (UNSPEC_LASX_XVFRINTRZ_D  "btrunc")
+   (UNSPEC_LASX_XVFRINT	     "rint")
+   (UNSPEC_LASX_XVFRINTRM_D  "floor")])
+
+(define_int_attr frint256_suffix
+  [(UNSPEC_LASX_XVFRINTRP_S  "rp")
+   (UNSPEC_LASX_XVFRINTRP_D  "rp")
+   (UNSPEC_LASX_XVFRINTRZ_S  "rz")
+   (UNSPEC_LASX_XVFRINTRZ_D  "rz")
+   (UNSPEC_LASX_XVFRINT	     "")
+   (UNSPEC_LASX_XVFRINTRM_S  "rm")
+   (UNSPEC_LASX_XVFRINTRM_D  "rm")])
+
+(define_expand "vec_init<mode><unitmode>"
+  [(match_operand:LASX 0 "register_operand")
+   (match_operand:LASX 1 "")]
+  "ISA_HAS_LASX"
+{
+  loongarch_expand_vector_init (operands[0], operands[1]);
+  DONE;
+})
+
+(define_expand "vec_initv32qiv16qi"
+ [(match_operand:V32QI 0 "register_operand")
+  (match_operand:V16QI 1 "")]
+  "ISA_HAS_LASX"
+{
+  loongarch_expand_vector_group_init (operands[0], operands[1]);
+  DONE;
+})
+
+;; FIXME: Delete.
+(define_insn "vec_pack_trunc_<mode>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(vec_concat:<VHSMODE256>
+	  (truncate:<VTRUNCMODE256>
+	    (match_operand:ILASX_DWH 1 "register_operand" "f"))
+	  (truncate:<VTRUNCMODE256>
+	    (match_operand:ILASX_DWH 2 "register_operand" "f"))))]
+  "ISA_HAS_LASX"
+  "xvpickev.<hslasxfmt>\t%u0,%u2,%u1\n\txvpermi.d\t%u0,%u0,0xd8"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "<MODE>")
+   (set_attr "length" "8")])
+
+(define_expand "vec_unpacks_hi_v8sf"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(float_extend:V4DF
+	  (vec_select:V4SF
+	    (match_operand:V8SF 1 "register_operand" "f")
+	    (match_dup 2))))]
+  "ISA_HAS_LASX"
+{
+  operands[2] = loongarch_lsx_vec_parallel_const_half (V8SFmode,
+						       true/*high_p*/);
+})
+
+(define_expand "vec_unpacks_lo_v8sf"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(float_extend:V4DF
+	  (vec_select:V4SF
+	    (match_operand:V8SF 1 "register_operand" "f")
+	    (match_dup 2))))]
+  "ISA_HAS_LASX"
+{
+  operands[2] = loongarch_lsx_vec_parallel_const_half (V8SFmode,
+						       false/*high_p*/);
+})
+
+(define_expand "vec_unpacks_hi_<mode>"
+  [(match_operand:<VDMODE256> 0 "register_operand")
+   (match_operand:ILASX_WHB 1 "register_operand")]
+  "ISA_HAS_LASX"
+{
+  loongarch_expand_vec_unpack (operands, false/*unsigned_p*/,
+			       true/*high_p*/);
+  DONE;
+})
+
+(define_expand "vec_unpacks_lo_<mode>"
+  [(match_operand:<VDMODE256> 0 "register_operand")
+   (match_operand:ILASX_WHB 1 "register_operand")]
+  "ISA_HAS_LASX"
+{
+  loongarch_expand_vec_unpack (operands, false/*unsigned_p*/, false/*high_p*/);
+  DONE;
+})
+
+(define_expand "vec_unpacku_hi_<mode>"
+  [(match_operand:<VDMODE256> 0 "register_operand")
+   (match_operand:ILASX_WHB 1 "register_operand")]
+  "ISA_HAS_LASX"
+{
+  loongarch_expand_vec_unpack (operands, true/*unsigned_p*/, true/*high_p*/);
+  DONE;
+})
+
+(define_expand "vec_unpacku_lo_<mode>"
+  [(match_operand:<VDMODE256> 0 "register_operand")
+   (match_operand:ILASX_WHB 1 "register_operand")]
+  "ISA_HAS_LASX"
+{
+  loongarch_expand_vec_unpack (operands, true/*unsigned_p*/, false/*high_p*/);
+  DONE;
+})
+
+(define_insn "lasx_xvinsgr2vr_<lasxfmt_f_wd>"
+  [(set (match_operand:ILASX_DW 0 "register_operand" "=f")
+	(vec_merge:ILASX_DW
+	  (vec_duplicate:ILASX_DW
+	    (match_operand:<UNITMODE> 1 "reg_or_0_operand" "rJ"))
+	  (match_operand:ILASX_DW 2 "register_operand" "0")
+	  (match_operand 3 "const_<bitmask256>_operand" "")))]
+  "ISA_HAS_LASX"
+{
+#if 0
+  if (!TARGET_64BIT && (<MODE>mode == V4DImode || <MODE>mode == V4DFmode))
+    return "#";
+  else
+#endif
+    return "xvinsgr2vr.<lasxfmt>\t%u0,%z1,%y3";
+}
+  [(set_attr "type" "simd_insert")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "vec_concatv4di"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(vec_concat:V4DI
+	  (match_operand:V2DI 1 "register_operand" "0")
+	  (match_operand:V2DI 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+{
+  return "xvpermi.q\t%u0,%u2,0x20";
+}
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "vec_concatv8si"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(vec_concat:V8SI
+	  (match_operand:V4SI 1 "register_operand" "0")
+	  (match_operand:V4SI 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+{
+  return "xvpermi.q\t%u0,%u2,0x20";
+}
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "vec_concatv16hi"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(vec_concat:V16HI
+	  (match_operand:V8HI 1 "register_operand" "0")
+	  (match_operand:V8HI 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+{
+  return "xvpermi.q\t%u0,%u2,0x20";
+}
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "vec_concatv32qi"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(vec_concat:V32QI
+	  (match_operand:V16QI 1 "register_operand" "0")
+	  (match_operand:V16QI 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+{
+  return "xvpermi.q\t%u0,%u2,0x20";
+}
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "vec_concatv4df"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(vec_concat:V4DF
+	  (match_operand:V2DF 1 "register_operand" "0")
+	  (match_operand:V2DF 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+{
+  return "xvpermi.q\t%u0,%u2,0x20";
+}
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "vec_concatv8sf"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(vec_concat:V8SF
+	  (match_operand:V4SF 1 "register_operand" "0")
+	  (match_operand:V4SF 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+{
+  return "xvpermi.q\t%u0,%u2,0x20";
+}
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "V4DI")])
+
+;; xvperm.w
+(define_insn "lasx_xvperm_<lasxfmt_f_wd>"
+  [(set (match_operand:LASX_W 0 "register_operand" "=f")
+	(unspec:LASX_W
+	  [(match_operand:LASX_W 1 "nonimmediate_operand" "f")
+	   (match_operand:V8SI 2 "register_operand" "f")]
+	  UNSPEC_LASX_XVPERM_W))]
+  "ISA_HAS_LASX"
+  "xvperm.w\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "<MODE>")])
+
+;; xvpermi.d
+(define_insn "lasx_xvpermi_d_<LASX:mode>"
+  [(set (match_operand:LASX 0 "register_operand" "=f")
+	  (unspec:LASX
+	    [(match_operand:LASX 1 "register_operand" "f")
+	     (match_operand:SI     2 "const_uimm8_operand")]
+	    UNSPEC_LASX_XVPERMI_D))]
+  "ISA_HAS_LASX"
+  "xvpermi.d\t%u0,%u1,%2"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvpermi_d_<mode>_1"
+  [(set (match_operand:LASX_D 0 "register_operand" "=f")
+	(vec_select:LASX_D
+	 (match_operand:LASX_D 1 "register_operand" "f")
+	 (parallel [(match_operand 2 "const_0_to_3_operand")
+		    (match_operand 3 "const_0_to_3_operand")
+		    (match_operand 4 "const_0_to_3_operand")
+		    (match_operand 5 "const_0_to_3_operand")])))]
+  "ISA_HAS_LASX"
+{
+  int mask = 0;
+  mask |= INTVAL (operands[2]) << 0;
+  mask |= INTVAL (operands[3]) << 2;
+  mask |= INTVAL (operands[4]) << 4;
+  mask |= INTVAL (operands[5]) << 6;
+  operands[2] = GEN_INT (mask);
+  return "xvpermi.d\t%u0,%u1,%2";
+}
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "<MODE>")])
+
+;; xvpermi.q
+(define_insn "lasx_xvpermi_q_<LASX:mode>"
+  [(set (match_operand:LASX 0 "register_operand" "=f")
+	(unspec:LASX
+	  [(match_operand:LASX 1 "register_operand" "0")
+	   (match_operand:LASX 2 "register_operand" "f")
+	   (match_operand     3 "const_uimm8_operand")]
+	  UNSPEC_LASX_XVPERMI_Q))]
+  "ISA_HAS_LASX"
+  "xvpermi.q\t%u0,%u2,%3"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvpickve2gr_d<u>"
+  [(set (match_operand:DI 0 "register_operand" "=r")
+	(any_extend:DI
+	  (vec_select:DI
+	    (match_operand:V4DI 1 "register_operand" "f")
+	    (parallel [(match_operand 2 "const_0_to_3_operand" "")]))))]
+  "ISA_HAS_LASX"
+  "xvpickve2gr.d<u>\t%0,%u1,%2"
+  [(set_attr "type" "simd_copy")
+   (set_attr "mode" "V4DI")])
+
+(define_expand "vec_set<mode>"
+  [(match_operand:ILASX_DW 0 "register_operand")
+   (match_operand:<UNITMODE> 1 "reg_or_0_operand")
+   (match_operand 2 "const_<indeximm256>_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx index = GEN_INT (1 << INTVAL (operands[2]));
+  emit_insn (gen_lasx_xvinsgr2vr_<lasxfmt_f_wd> (operands[0], operands[1],
+                      operands[0], index));
+  DONE;
+})
+
+(define_expand "vec_set<mode>"
+  [(match_operand:FLASX 0 "register_operand")
+   (match_operand:<UNITMODE> 1 "reg_or_0_operand")
+   (match_operand 2 "const_<indeximm256>_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx index = GEN_INT (1 << INTVAL (operands[2]));
+  emit_insn (gen_lasx_xvinsve0_<lasxfmt_f>_scalar (operands[0], operands[1],
+                      operands[0], index));
+  DONE;
+})
+
+(define_expand "vec_extract<mode><unitmode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:LASX 1 "register_operand")
+   (match_operand 2 "const_<indeximm256>_operand")]
+  "ISA_HAS_LASX"
+{
+  loongarch_expand_vector_extract (operands[0], operands[1],
+      INTVAL (operands[2]));
+  DONE;
+})
+
+(define_expand "vec_perm<mode>"
+ [(match_operand:LASX 0 "register_operand")
+  (match_operand:LASX 1 "register_operand")
+  (match_operand:LASX 2 "register_operand")
+  (match_operand:<VIMODE256> 3 "register_operand")]
+  "ISA_HAS_LASX"
+{
+   loongarch_expand_vec_perm_1 (operands);
+   DONE;
+})
+
+;; FIXME: 256??
+(define_expand "vcondu<LASX:mode><ILASX:mode>"
+  [(match_operand:LASX 0 "register_operand")
+   (match_operand:LASX 1 "reg_or_m1_operand")
+   (match_operand:LASX 2 "reg_or_0_operand")
+   (match_operator 3 ""
+    [(match_operand:ILASX 4 "register_operand")
+     (match_operand:ILASX 5 "register_operand")])]
+  "ISA_HAS_LASX
+   && (GET_MODE_NUNITS (<LASX:MODE>mode)
+       == GET_MODE_NUNITS (<ILASX:MODE>mode))"
+{
+  loongarch_expand_vec_cond_expr (<LASX:MODE>mode, <LASX:VIMODE256>mode,
+				  operands);
+  DONE;
+})
+
+;; FIXME: 256??
+(define_expand "vcond<LASX:mode><LASX_2:mode>"
+  [(match_operand:LASX 0 "register_operand")
+   (match_operand:LASX 1 "reg_or_m1_operand")
+   (match_operand:LASX 2 "reg_or_0_operand")
+   (match_operator 3 ""
+     [(match_operand:LASX_2 4 "register_operand")
+      (match_operand:LASX_2 5 "register_operand")])]
+  "ISA_HAS_LASX
+   && (GET_MODE_NUNITS (<LASX:MODE>mode)
+       == GET_MODE_NUNITS (<LASX_2:MODE>mode))"
+{
+  loongarch_expand_vec_cond_expr (<LASX:MODE>mode, <LASX:VIMODE256>mode,
+				  operands);
+  DONE;
+})
+
+;; Same as vcond_
+(define_expand "vcond_mask_<ILASX:mode><ILASX:mode>"
+  [(match_operand:ILASX 0 "register_operand")
+   (match_operand:ILASX 1 "reg_or_m1_operand")
+   (match_operand:ILASX 2 "reg_or_0_operand")
+   (match_operand:ILASX 3 "register_operand")]
+  "ISA_HAS_LASX"
+{
+  loongarch_expand_vec_cond_mask_expr (<ILASX:MODE>mode,
+				      <ILASX:VIMODE256>mode, operands);
+  DONE;
+})
+
+(define_expand "lasx_xvrepli<mode>"
+  [(match_operand:ILASX 0 "register_operand")
+   (match_operand 1 "const_imm10_operand")]
+  "ISA_HAS_LASX"
+{
+  if (<MODE>mode == V32QImode)
+    operands[1] = GEN_INT (trunc_int_for_mode (INTVAL (operands[1]),
+					       <UNITMODE>mode));
+  emit_move_insn (operands[0],
+  loongarch_gen_const_int_vector (<MODE>mode, INTVAL (operands[1])));
+  DONE;
+})
+
+(define_expand "mov<mode>"
+  [(set (match_operand:LASX 0)
+	(match_operand:LASX 1))]
+  "ISA_HAS_LASX"
+{
+  if (loongarch_legitimize_move (<MODE>mode, operands[0], operands[1]))
+    DONE;
+})
+
+
+(define_expand "movmisalign<mode>"
+  [(set (match_operand:LASX 0)
+	(match_operand:LASX 1))]
+  "ISA_HAS_LASX"
+{
+  if (loongarch_legitimize_move (<MODE>mode, operands[0], operands[1]))
+    DONE;
+})
+
+;; 256-bit LASX modes can only exist in LASX registers or memory.
+(define_insn "mov<mode>_lasx"
+  [(set (match_operand:LASX 0 "nonimmediate_operand" "=f,f,R,*r,*f")
+	(match_operand:LASX 1 "move_operand" "fYGYI,R,f,*f,*r"))]
+  "ISA_HAS_LASX"
+  { return loongarch_output_move (operands[0], operands[1]); }
+  [(set_attr "type" "simd_move,simd_load,simd_store,simd_copy,simd_insert")
+   (set_attr "mode" "<MODE>")
+   (set_attr "length" "8,4,4,4,4")])
+
+
+(define_split
+  [(set (match_operand:LASX 0 "nonimmediate_operand")
+	(match_operand:LASX 1 "move_operand"))]
+  "reload_completed && ISA_HAS_LASX
+   && loongarch_split_move_insn_p (operands[0], operands[1])"
+  [(const_int 0)]
+{
+  loongarch_split_move_insn (operands[0], operands[1], curr_insn);
+  DONE;
+})
+
+;; Offset load
+(define_expand "lasx_mxld_<lasxfmt_f>"
+  [(match_operand:LASX 0 "register_operand")
+   (match_operand 1 "pmode_register_operand")
+   (match_operand 2 "aq10<lasxfmt>_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx addr = plus_constant (GET_MODE (operands[1]), operands[1],
+				      INTVAL (operands[2]));
+  loongarch_emit_move (operands[0], gen_rtx_MEM (<MODE>mode, addr));
+  DONE;
+})
+
+;; Offset store
+(define_expand "lasx_mxst_<lasxfmt_f>"
+  [(match_operand:LASX 0 "register_operand")
+   (match_operand 1 "pmode_register_operand")
+   (match_operand 2 "aq10<lasxfmt>_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx addr = plus_constant (GET_MODE (operands[1]), operands[1],
+			    INTVAL (operands[2]));
+  loongarch_emit_move (gen_rtx_MEM (<MODE>mode, addr), operands[0]);
+  DONE;
+})
+
+;; LASX
+(define_insn "add<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f,f")
+	(plus:ILASX
+	  (match_operand:ILASX 1 "register_operand" "f,f,f")
+	  (match_operand:ILASX 2 "reg_or_vector_same_ximm5_operand" "f,Unv5,Uuv5")))]
+  "ISA_HAS_LASX"
+{
+  switch (which_alternative)
+    {
+    case 0:
+      return "xvadd.<lasxfmt>\t%u0,%u1,%u2";
+    case 1:
+      {
+	HOST_WIDE_INT val = INTVAL (CONST_VECTOR_ELT (operands[2], 0));
+
+	operands[2] = GEN_INT (-val);
+	return "xvsubi.<lasxfmt_u>\t%u0,%u1,%d2";
+      }
+    case 2:
+      return "xvaddi.<lasxfmt_u>\t%u0,%u1,%E2";
+    default:
+      gcc_unreachable ();
+    }
+}
+  [(set_attr "alu_type" "simd_add")
+   (set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "sub<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f")
+	(minus:ILASX
+	  (match_operand:ILASX 1 "register_operand" "f,f")
+	  (match_operand:ILASX 2 "reg_or_vector_same_uimm5_operand" "f,Uuv5")))]
+  "ISA_HAS_LASX"
+  "@
+   xvsub.<lasxfmt>\t%u0,%u1,%u2
+   xvsubi.<lasxfmt_u>\t%u0,%u1,%E2"
+  [(set_attr "alu_type" "simd_add")
+   (set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "mul<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(mult:ILASX (match_operand:ILASX 1 "register_operand" "f")
+		    (match_operand:ILASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvmul.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_mul")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvmadd_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(plus:ILASX (mult:ILASX (match_operand:ILASX 2 "register_operand" "f")
+				(match_operand:ILASX 3 "register_operand" "f"))
+		    (match_operand:ILASX 1 "register_operand" "0")))]
+  "ISA_HAS_LASX"
+  "xvmadd.<lasxfmt>\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_mul")
+   (set_attr "mode" "<MODE>")])
+
+
+
+(define_insn "lasx_xvmsub_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(minus:ILASX (match_operand:ILASX 1 "register_operand" "0")
+		     (mult:ILASX (match_operand:ILASX 2 "register_operand" "f")
+				 (match_operand:ILASX 3 "register_operand" "f"))))]
+  "ISA_HAS_LASX"
+  "xvmsub.<lasxfmt>\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_mul")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "div<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(div:ILASX (match_operand:ILASX 1 "register_operand" "f")
+		   (match_operand:ILASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+{
+  return loongarch_lsx_output_division ("xvdiv.<lasxfmt>\t%u0,%u1,%u2",
+					operands);
+}
+  [(set_attr "type" "simd_div")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "udiv<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(udiv:ILASX (match_operand:ILASX 1 "register_operand" "f")
+		    (match_operand:ILASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+{
+  return loongarch_lsx_output_division ("xvdiv.<lasxfmt_u>\t%u0,%u1,%u2",
+					operands);
+}
+  [(set_attr "type" "simd_div")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "mod<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(mod:ILASX (match_operand:ILASX 1 "register_operand" "f")
+		   (match_operand:ILASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+{
+  return loongarch_lsx_output_division ("xvmod.<lasxfmt>\t%u0,%u1,%u2",
+					operands);
+}
+  [(set_attr "type" "simd_div")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "umod<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(umod:ILASX (match_operand:ILASX 1 "register_operand" "f")
+		    (match_operand:ILASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+{
+  return loongarch_lsx_output_division ("xvmod.<lasxfmt_u>\t%u0,%u1,%u2",
+					operands);
+}
+  [(set_attr "type" "simd_div")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "xor<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f,f")
+	(xor:ILASX
+	  (match_operand:ILASX 1 "register_operand" "f,f,f")
+	  (match_operand:ILASX 2 "reg_or_vector_same_val_operand" "f,YC,Urv8")))]
+  "ISA_HAS_LASX"
+  "@
+   xvxor.v\t%u0,%u1,%u2
+   xvbitrevi.%v0\t%u0,%u1,%V2
+   xvxori.b\t%u0,%u1,%B2"
+  [(set_attr "type" "simd_logic,simd_bit,simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "ior<mode>3"
+  [(set (match_operand:LASX 0 "register_operand" "=f,f,f")
+	(ior:LASX
+	  (match_operand:LASX 1 "register_operand" "f,f,f")
+	  (match_operand:LASX 2 "reg_or_vector_same_val_operand" "f,YC,Urv8")))]
+  "ISA_HAS_LASX"
+  "@
+   xvor.v\t%u0,%u1,%u2
+   xvbitseti.%v0\t%u0,%u1,%V2
+   xvori.b\t%u0,%u1,%B2"
+  [(set_attr "type" "simd_logic,simd_bit,simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "and<mode>3"
+  [(set (match_operand:LASX 0 "register_operand" "=f,f,f")
+	(and:LASX
+	  (match_operand:LASX 1 "register_operand" "f,f,f")
+	  (match_operand:LASX 2 "reg_or_vector_same_val_operand" "f,YZ,Urv8")))]
+  "ISA_HAS_LASX"
+{
+  switch (which_alternative)
+    {
+    case 0:
+      return "xvand.v\t%u0,%u1,%u2";
+    case 1:
+      {
+	rtx elt0 = CONST_VECTOR_ELT (operands[2], 0);
+	unsigned HOST_WIDE_INT val = ~UINTVAL (elt0);
+	operands[2] = loongarch_gen_const_int_vector (<MODE>mode, val & (-val));
+	return "xvbitclri.%v0\t%u0,%u1,%V2";
+      }
+    case 2:
+      return "xvandi.b\t%u0,%u1,%B2";
+    default:
+      gcc_unreachable ();
+    }
+}
+  [(set_attr "type" "simd_logic,simd_bit,simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "one_cmpl<mode>2"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(not:ILASX (match_operand:ILASX 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvnor.v\t%u0,%u1,%u1"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "V32QI")])
+
+;; LASX
+(define_insn "vlshr<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f")
+	(lshiftrt:ILASX
+	  (match_operand:ILASX 1 "register_operand" "f,f")
+	  (match_operand:ILASX 2 "reg_or_vector_same_uimm6_operand" "f,Uuv6")))]
+  "ISA_HAS_LASX"
+  "@
+   xvsrl.<lasxfmt>\t%u0,%u1,%u2
+   xvsrli.<lasxfmt>\t%u0,%u1,%E2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+;; LASX ">>"
+(define_insn "vashr<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f")
+	(ashiftrt:ILASX
+	  (match_operand:ILASX 1 "register_operand" "f,f")
+	  (match_operand:ILASX 2 "reg_or_vector_same_uimm6_operand" "f,Uuv6")))]
+  "ISA_HAS_LASX"
+  "@
+   xvsra.<lasxfmt>\t%u0,%u1,%u2
+   xvsrai.<lasxfmt>\t%u0,%u1,%E2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+;; LASX "<<"
+(define_insn "vashl<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f")
+	(ashift:ILASX
+	  (match_operand:ILASX 1 "register_operand" "f,f")
+	  (match_operand:ILASX 2 "reg_or_vector_same_uimm6_operand" "f,Uuv6")))]
+  "ISA_HAS_LASX"
+  "@
+   xvsll.<lasxfmt>\t%u0,%u1,%u2
+   xvslli.<lasxfmt>\t%u0,%u1,%E2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+
+(define_insn "add<mode>3"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(plus:FLASX (match_operand:FLASX 1 "register_operand" "f")
+		    (match_operand:FLASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvfadd.<flasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "sub<mode>3"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(minus:FLASX (match_operand:FLASX 1 "register_operand" "f")
+		     (match_operand:FLASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvfsub.<flasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "mul<mode>3"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(mult:FLASX (match_operand:FLASX 1 "register_operand" "f")
+		    (match_operand:FLASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvfmul.<flasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fmul")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "div<mode>3"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(div:FLASX (match_operand:FLASX 1 "register_operand" "f")
+		   (match_operand:FLASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvfdiv.<flasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fdiv")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "fma<mode>4"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(fma:FLASX (match_operand:FLASX 1 "register_operand" "f")
+		   (match_operand:FLASX 2 "register_operand" "f")
+		   (match_operand:FLASX 3 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvfmadd.<flasxfmt>\t%u0,%u1,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "fnma<mode>4"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(fma:FLASX (neg:FLASX (match_operand:FLASX 1 "register_operand" "f"))
+		   (match_operand:FLASX 2 "register_operand" "f")
+		   (match_operand:FLASX 3 "register_operand" "0")))]
+  "ISA_HAS_LASX"
+  "xvfnmsub.<flasxfmt>\t%u0,%u1,%u2,%u0"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "sqrt<mode>2"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(sqrt:FLASX (match_operand:FLASX 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvfsqrt.<flasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_fdiv")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvadda_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(plus:ILASX (abs:ILASX (match_operand:ILASX 1 "register_operand" "f"))
+		    (abs:ILASX (match_operand:ILASX 2 "register_operand" "f"))))]
+  "ISA_HAS_LASX"
+  "xvadda.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "ssadd<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(ss_plus:ILASX (match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvsadd.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "usadd<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(us_plus:ILASX (match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvsadd.<lasxfmt_u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvabsd_s_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(abs:ILASX (minus:ILASX (match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f"))))]
+  "ISA_HAS_LASX"
+  "xvabsd.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvabsd_u_<lasxfmt_u>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVABSD_U))]
+  "ISA_HAS_LASX"
+  "xvabsd.<lasxfmt_u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvavg_s_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVAVG_S))]
+  "ISA_HAS_LASX"
+  "xvavg.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvavg_u_<lasxfmt_u>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVAVG_U))]
+  "ISA_HAS_LASX"
+  "xvavg.<lasxfmt_u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvavgr_s_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVAVGR_S))]
+  "ISA_HAS_LASX"
+  "xvavgr.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvavgr_u_<lasxfmt_u>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVAVGR_U))]
+  "ISA_HAS_LASX"
+  "xvavgr.<lasxfmt_u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvbitclr_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVBITCLR))]
+  "ISA_HAS_LASX"
+  "xvbitclr.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvbitclri_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand 2 "const_<bitimm256>_operand" "")]
+		      UNSPEC_LASX_XVBITCLRI))]
+  "ISA_HAS_LASX"
+  "xvbitclri.<lasxfmt>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvbitrev_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVBITREV))]
+  "ISA_HAS_LASX"
+  "xvbitrev.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvbitrevi_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand 2 "const_<bitimm256>_operand" "")]
+		     UNSPEC_LASX_XVBITREVI))]
+  "ISA_HAS_LASX"
+  "xvbitrevi.<lasxfmt>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvbitsel_<lasxfmt_f>"
+  [(set (match_operand:LASX 0 "register_operand" "=f")
+	(ior:LASX (and:LASX (not:LASX
+			      (match_operand:LASX 3 "register_operand" "f"))
+			      (match_operand:LASX 1 "register_operand" "f"))
+		  (and:LASX (match_dup 3)
+			    (match_operand:LASX 2 "register_operand" "f"))))]
+  "ISA_HAS_LASX"
+  "xvbitsel.v\t%u0,%u1,%u2,%u3"
+  [(set_attr "type" "simd_bitmov")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvbitseli_b"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(ior:V32QI (and:V32QI (not:V32QI
+				(match_operand:V32QI 1 "register_operand" "0"))
+			      (match_operand:V32QI 2 "register_operand" "f"))
+		   (and:V32QI (match_dup 1)
+			      (match_operand:V32QI 3 "const_vector_same_val_operand" "Urv8"))))]
+  "ISA_HAS_LASX"
+  "xvbitseli.b\t%u0,%u2,%B3"
+  [(set_attr "type" "simd_bitmov")
+   (set_attr "mode" "V32QI")])
+
+(define_insn "lasx_xvbitset_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVBITSET))]
+  "ISA_HAS_LASX"
+  "xvbitset.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvbitseti_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand 2 "const_<bitimm256>_operand" "")]
+		      UNSPEC_LASX_XVBITSETI))]
+  "ISA_HAS_LASX"
+  "xvbitseti.<lasxfmt>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvs<ICC:icc>_<ILASX:lasxfmt><cmpi_1>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f")
+	(ICC:ILASX
+	  (match_operand:ILASX 1 "register_operand" "f,f")
+	  (match_operand:ILASX 2 "reg_or_vector_same_<ICC:cmpi>imm5_operand" "f,U<ICC:cmpi>v5")))]
+  "ISA_HAS_LASX"
+  "@
+   xvs<ICC:icc>.<ILASX:lasxfmt><cmpi_1>\t%u0,%u1,%u2
+   xvs<ICC:icci>.<ILASX:lasxfmt><cmpi_1>\t%u0,%u1,%E2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_expand "vec_cmp<mode><mode256_i>"
+  [(set (match_operand:<VIMODE256> 0 "register_operand")
+	(match_operator 1 ""
+	  [(match_operand:LASX 2 "register_operand")
+	   (match_operand:LASX 3 "register_operand")]))]
+  "ISA_HAS_LASX"
+{
+  bool ok = loongarch_expand_vec_cmp (operands);
+  gcc_assert (ok);
+  DONE;
+})
+
+(define_expand "vec_cmpu<ILASX:mode><mode256_i>"
+  [(set (match_operand:<VIMODE256> 0 "register_operand")
+	(match_operator 1 ""
+	  [(match_operand:ILASX 2 "register_operand")
+	   (match_operand:ILASX 3 "register_operand")]))]
+  "ISA_HAS_LASX"
+{
+  bool ok = loongarch_expand_vec_cmp (operands);
+  gcc_assert (ok);
+  DONE;
+})
+
+(define_insn "lasx_xvfclass_<flasxfmt>"
+  [(set (match_operand:<VIMODE256> 0 "register_operand" "=f")
+	(unspec:<VIMODE256> [(match_operand:FLASX 1 "register_operand" "f")]
+			    UNSPEC_LASX_XVFCLASS))]
+  "ISA_HAS_LASX"
+  "xvfclass.<flasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_fclass")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvfcmp_caf_<flasxfmt>"
+  [(set (match_operand:<VIMODE256> 0 "register_operand" "=f")
+	(unspec:<VIMODE256> [(match_operand:FLASX 1 "register_operand" "f")
+			     (match_operand:FLASX 2 "register_operand" "f")]
+			    UNSPEC_LASX_XVFCMP_CAF))]
+  "ISA_HAS_LASX"
+  "xvfcmp.caf.<flasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fcmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvfcmp_cune_<FLASX:flasxfmt>"
+  [(set (match_operand:<VIMODE256> 0 "register_operand" "=f")
+	(unspec:<VIMODE256> [(match_operand:FLASX 1 "register_operand" "f")
+			     (match_operand:FLASX 2 "register_operand" "f")]
+			    UNSPEC_LASX_XVFCMP_CUNE))]
+  "ISA_HAS_LASX"
+  "xvfcmp.cune.<FLASX:flasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fcmp")
+   (set_attr "mode" "<MODE>")])
+
+
+
+(define_int_iterator FSC256_UNS [UNSPEC_LASX_XVFCMP_SAF UNSPEC_LASX_XVFCMP_SUN
+				 UNSPEC_LASX_XVFCMP_SOR UNSPEC_LASX_XVFCMP_SEQ
+				 UNSPEC_LASX_XVFCMP_SNE UNSPEC_LASX_XVFCMP_SUEQ
+				 UNSPEC_LASX_XVFCMP_SUNE UNSPEC_LASX_XVFCMP_SULE
+				 UNSPEC_LASX_XVFCMP_SULT UNSPEC_LASX_XVFCMP_SLE
+				 UNSPEC_LASX_XVFCMP_SLT])
+
+(define_int_attr fsc256
+  [(UNSPEC_LASX_XVFCMP_SAF  "saf")
+   (UNSPEC_LASX_XVFCMP_SUN  "sun")
+   (UNSPEC_LASX_XVFCMP_SOR  "sor")
+   (UNSPEC_LASX_XVFCMP_SEQ  "seq")
+   (UNSPEC_LASX_XVFCMP_SNE  "sne")
+   (UNSPEC_LASX_XVFCMP_SUEQ "sueq")
+   (UNSPEC_LASX_XVFCMP_SUNE "sune")
+   (UNSPEC_LASX_XVFCMP_SULE "sule")
+   (UNSPEC_LASX_XVFCMP_SULT "sult")
+   (UNSPEC_LASX_XVFCMP_SLE  "sle")
+   (UNSPEC_LASX_XVFCMP_SLT  "slt")])
+
+(define_insn "lasx_xvfcmp_<vfcond:fcc>_<FLASX:flasxfmt>"
+  [(set (match_operand:<VIMODE256> 0 "register_operand" "=f")
+	(vfcond:<VIMODE256> (match_operand:FLASX 1 "register_operand" "f")
+			    (match_operand:FLASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvfcmp.<vfcond:fcc>.<FLASX:flasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fcmp")
+   (set_attr "mode" "<MODE>")])
+
+
+(define_insn "lasx_xvfcmp_<fsc256>_<FLASX:flasxfmt>"
+  [(set (match_operand:<VIMODE256> 0 "register_operand" "=f")
+	(unspec:<VIMODE256> [(match_operand:FLASX 1 "register_operand" "f")
+			     (match_operand:FLASX 2 "register_operand" "f")]
+			    FSC256_UNS))]
+  "ISA_HAS_LASX"
+  "xvfcmp.<fsc256>.<FLASX:flasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fcmp")
+   (set_attr "mode" "<MODE>")])
+
+
+(define_mode_attr fint256
+  [(V8SF "v8si")
+   (V4DF "v4di")])
+
+(define_mode_attr FINTCNV256
+  [(V8SF "I2S")
+   (V4DF "I2D")])
+
+(define_mode_attr FINTCNV256_2
+  [(V8SF "S2I")
+   (V4DF "D2I")])
+
+(define_insn "float<fint256><FLASX:mode>2"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(float:FLASX (match_operand:<VIMODE256> 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvffint.<flasxfmt>.<ilasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "cnv_mode" "<FINTCNV256>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "floatuns<fint256><FLASX:mode>2"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(unsigned_float:FLASX
+	  (match_operand:<VIMODE256> 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvffint.<flasxfmt>.<ilasxfmt_u>\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "cnv_mode" "<FINTCNV256>")
+   (set_attr "mode" "<MODE>")])
+
+(define_mode_attr FFQ256
+  [(V4SF "V16HI")
+   (V2DF "V8SI")])
+
+(define_insn "lasx_xvreplgr2vr_<lasxfmt_f>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f")
+	(vec_duplicate:ILASX
+	  (match_operand:<UNITMODE> 1 "reg_or_0_operand" "r,J")))]
+  "ISA_HAS_LASX"
+{
+  if (which_alternative == 1)
+    return "xvldi.b\t%u0,0" ;
+
+  if (!TARGET_64BIT && (<MODE>mode == V2DImode || <MODE>mode == V2DFmode))
+    return "#";
+  else
+    return "xvreplgr2vr.<lasxfmt>\t%u0,%z1";
+}
+  [(set_attr "type" "simd_fill")
+   (set_attr "mode" "<MODE>")
+   (set_attr "length" "8")])
+
+(define_insn "logb<mode>2"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(unspec:FLASX [(match_operand:FLASX 1 "register_operand" "f")]
+		      UNSPEC_LASX_XVFLOGB))]
+  "ISA_HAS_LASX"
+  "xvflogb.<flasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_flog2")
+   (set_attr "mode" "<MODE>")])
+
+
+(define_insn "smax<mode>3"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(smax:FLASX (match_operand:FLASX 1 "register_operand" "f")
+		    (match_operand:FLASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvfmax.<flasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fminmax")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvfmaxa_<flasxfmt>"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(if_then_else:FLASX
+	   (gt (abs:FLASX (match_operand:FLASX 1 "register_operand" "f"))
+	       (abs:FLASX (match_operand:FLASX 2 "register_operand" "f")))
+	   (match_dup 1)
+	   (match_dup 2)))]
+  "ISA_HAS_LASX"
+  "xvfmaxa.<flasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fminmax")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "smin<mode>3"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(smin:FLASX (match_operand:FLASX 1 "register_operand" "f")
+		    (match_operand:FLASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvfmin.<flasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fminmax")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvfmina_<flasxfmt>"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(if_then_else:FLASX
+	   (lt (abs:FLASX (match_operand:FLASX 1 "register_operand" "f"))
+	       (abs:FLASX (match_operand:FLASX 2 "register_operand" "f")))
+	   (match_dup 1)
+	   (match_dup 2)))]
+  "ISA_HAS_LASX"
+  "xvfmina.<flasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fminmax")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvfrecip_<flasxfmt>"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(unspec:FLASX [(match_operand:FLASX 1 "register_operand" "f")]
+		      UNSPEC_LASX_XVFRECIP))]
+  "ISA_HAS_LASX"
+  "xvfrecip.<flasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_fdiv")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvfrint_<flasxfmt>"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(unspec:FLASX [(match_operand:FLASX 1 "register_operand" "f")]
+		      UNSPEC_LASX_XVFRINT))]
+  "ISA_HAS_LASX"
+  "xvfrint.<flasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvfrsqrt_<flasxfmt>"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(unspec:FLASX [(match_operand:FLASX 1 "register_operand" "f")]
+		      UNSPEC_LASX_XVFRSQRT))]
+  "ISA_HAS_LASX"
+  "xvfrsqrt.<flasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_fdiv")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvftint_s_<ilasxfmt>_<flasxfmt>"
+  [(set (match_operand:<VIMODE256> 0 "register_operand" "=f")
+	(unspec:<VIMODE256> [(match_operand:FLASX 1 "register_operand" "f")]
+			    UNSPEC_LASX_XVFTINT_S))]
+  "ISA_HAS_LASX"
+  "xvftint.<ilasxfmt>.<flasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "cnv_mode" "<FINTCNV256_2>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvftint_u_<ilasxfmt_u>_<flasxfmt>"
+  [(set (match_operand:<VIMODE256> 0 "register_operand" "=f")
+	(unspec:<VIMODE256> [(match_operand:FLASX 1 "register_operand" "f")]
+			    UNSPEC_LASX_XVFTINT_U))]
+  "ISA_HAS_LASX"
+  "xvftint.<ilasxfmt_u>.<flasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "cnv_mode" "<FINTCNV256_2>")
+   (set_attr "mode" "<MODE>")])
+
+
+
+(define_insn "fix_trunc<FLASX:mode><mode256_i>2"
+  [(set (match_operand:<VIMODE256> 0 "register_operand" "=f")
+	(fix:<VIMODE256> (match_operand:FLASX 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvftintrz.<ilasxfmt>.<flasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "cnv_mode" "<FINTCNV256_2>")
+   (set_attr "mode" "<MODE>")])
+
+
+(define_insn "fixuns_trunc<FLASX:mode><mode256_i>2"
+  [(set (match_operand:<VIMODE256> 0 "register_operand" "=f")
+	(unsigned_fix:<VIMODE256> (match_operand:FLASX 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvftintrz.<ilasxfmt_u>.<flasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "cnv_mode" "<FINTCNV256_2>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvh<optab>w_h<u>_b<u>"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(addsub:V16HI
+	  (any_extend:V16HI
+	    (vec_select:V16QI
+	      (match_operand:V32QI 1 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)
+			 (const_int 17) (const_int 19)
+			 (const_int 21) (const_int 23)
+			 (const_int 25) (const_int 27)
+			 (const_int 29) (const_int 31)])))
+	  (any_extend:V16HI
+	    (vec_select:V16QI
+	      (match_operand:V32QI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)
+			 (const_int 16) (const_int 18)
+			 (const_int 20) (const_int 22)
+			 (const_int 24) (const_int 26)
+			 (const_int 28) (const_int 30)])))))]
+  "ISA_HAS_LASX"
+  "xvh<optab>w.h<u>.b<u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V16HI")])
+
+(define_insn "lasx_xvh<optab>w_w<u>_h<u>"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(addsub:V8SI
+	  (any_extend:V8SI
+	    (vec_select:V8HI
+	      (match_operand:V16HI 1 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)])))
+	  (any_extend:V8SI
+	    (vec_select:V8HI
+	      (match_operand:V16HI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)])))))]
+  "ISA_HAS_LASX"
+  "xvh<optab>w.w<u>.h<u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_xvh<optab>w_d<u>_w<u>"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(addsub:V4DI
+	  (any_extend:V4DI
+	    (vec_select:V4SI
+	      (match_operand:V8SI 1 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)])))
+	  (any_extend:V4DI
+	    (vec_select:V4SI
+	      (match_operand:V8SI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)])))))]
+  "ISA_HAS_LASX"
+  "xvh<optab>w.d<u>.w<u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_xvpackev_b"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(vec_select:V32QI
+	  (vec_concat:V64QI
+	    (match_operand:V32QI 1 "register_operand" "f")
+	    (match_operand:V32QI 2 "register_operand" "f"))
+	  (parallel [(const_int 0)  (const_int 32)
+		     (const_int 2)  (const_int 34)
+		     (const_int 4)  (const_int 36)
+		     (const_int 6)  (const_int 38)
+		     (const_int 8)  (const_int 40)
+		     (const_int 10)  (const_int 42)
+		     (const_int 12)  (const_int 44)
+		     (const_int 14)  (const_int 46)
+		     (const_int 16)  (const_int 48)
+		     (const_int 18)  (const_int 50)
+		     (const_int 20)  (const_int 52)
+		     (const_int 22)  (const_int 54)
+		     (const_int 24)  (const_int 56)
+		     (const_int 26)  (const_int 58)
+		     (const_int 28)  (const_int 60)
+		     (const_int 30)  (const_int 62)])))]
+  "ISA_HAS_LASX"
+  "xvpackev.b\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V32QI")])
+
+
+(define_insn "lasx_xvpackev_h"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(vec_select:V16HI
+	  (vec_concat:V32HI
+	    (match_operand:V16HI 1 "register_operand" "f")
+	    (match_operand:V16HI 2 "register_operand" "f"))
+	  (parallel [(const_int 0)  (const_int 16)
+		     (const_int 2)  (const_int 18)
+		     (const_int 4)  (const_int 20)
+		     (const_int 6)  (const_int 22)
+		     (const_int 8)  (const_int 24)
+		     (const_int 10) (const_int 26)
+		     (const_int 12) (const_int 28)
+		     (const_int 14) (const_int 30)])))]
+  "ISA_HAS_LASX"
+  "xvpackev.h\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V16HI")])
+
+(define_insn "lasx_xvpackev_w"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(vec_select:V8SI
+	  (vec_concat:V16SI
+	    (match_operand:V8SI 1 "register_operand" "f")
+	    (match_operand:V8SI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 8)
+		     (const_int 2) (const_int 10)
+		     (const_int 4) (const_int 12)
+		     (const_int 6) (const_int 14)])))]
+  "ISA_HAS_LASX"
+  "xvpackev.w\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_xvpackev_w_f"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(vec_select:V8SF
+	  (vec_concat:V16SF
+	    (match_operand:V8SF 1 "register_operand" "f")
+	    (match_operand:V8SF 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 8)
+		     (const_int 2) (const_int 10)
+		     (const_int 4) (const_int 12)
+		     (const_int 6) (const_int 14)])))]
+  "ISA_HAS_LASX"
+  "xvpackev.w\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8SF")])
+
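+;; xvilvh: interleave the elements from the high half of each 128-bit lane
+;; of the two source vectors.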
+(define_insn "lasx_xvilvh_b"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(vec_select:V32QI
+	  (vec_concat:V64QI
+	    (match_operand:V32QI 1 "register_operand" "f")
+	    (match_operand:V32QI 2 "register_operand" "f"))
+	  (parallel [(const_int 8) (const_int 40)
+		     (const_int 9) (const_int 41)
+		     (const_int 10) (const_int 42)
+		     (const_int 11) (const_int 43)
+		     (const_int 12) (const_int 44)
+		     (const_int 13) (const_int 45)
+		     (const_int 14) (const_int 46)
+		     (const_int 15) (const_int 47)
+		     (const_int 24) (const_int 56)
+		     (const_int 25) (const_int 57)
+		     (const_int 26) (const_int 58)
+		     (const_int 27) (const_int 59)
+		     (const_int 28) (const_int 60)
+		     (const_int 29) (const_int 61)
+		     (const_int 30) (const_int 62)
+		     (const_int 31) (const_int 63)])))]
+  "ISA_HAS_LASX"
+  "xvilvh.b\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V32QI")])
+
+(define_insn "lasx_xvilvh_h"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(vec_select:V16HI
+	  (vec_concat:V32HI
+	    (match_operand:V16HI 1 "register_operand" "f")
+	    (match_operand:V16HI 2 "register_operand" "f"))
+	  (parallel [(const_int 4) (const_int 20)
+		     (const_int 5) (const_int 21)
+		     (const_int 6) (const_int 22)
+		     (const_int 7) (const_int 23)
+		     (const_int 12) (const_int 28)
+		     (const_int 13) (const_int 29)
+		     (const_int 14) (const_int 30)
+		     (const_int 15) (const_int 31)])))]
+  "ISA_HAS_LASX"
+  "xvilvh.h\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V16HI")])
+
+(define_insn "lasx_xvilvh_w"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(vec_select:V8SI
+	  (vec_concat:V16SI
+	    (match_operand:V8SI 1 "register_operand" "f")
+	    (match_operand:V8SI 2 "register_operand" "f"))
+	  (parallel [(const_int 2) (const_int 10)
+		     (const_int 3) (const_int 11)
+		     (const_int 6) (const_int 14)
+		     (const_int 7) (const_int 15)])))]
+  "ISA_HAS_LASX"
+  "xvilvh.w\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_xvilvh_w_f"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(vec_select:V8SF
+	  (vec_concat:V16SF
+	    (match_operand:V8SF 1 "register_operand" "f")
+	    (match_operand:V8SF 2 "register_operand" "f"))
+	  (parallel [(const_int 2) (const_int 10)
+		     (const_int 3) (const_int 11)
+		     (const_int 6) (const_int 14)
+		     (const_int 7) (const_int 15)])))]
+  "ISA_HAS_LASX"
+  "xvilvh.w\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8SF")])
+
+
+(define_insn "lasx_xvilvh_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(vec_select:V4DI
+	  (vec_concat:V8DI
+	    (match_operand:V4DI 1 "register_operand" "f")
+	    (match_operand:V4DI 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 5)
+		     (const_int 3) (const_int 7)])))]
+  "ISA_HAS_LASX"
+  "xvilvh.d\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_xvilvh_d_f"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(vec_select:V4DF
+	  (vec_concat:V8DF
+	    (match_operand:V4DF 1 "register_operand" "f")
+	    (match_operand:V4DF 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 5)
+		     (const_int 3) (const_int 7)])))]
+  "ISA_HAS_LASX"
+  "xvilvh.d\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4DF")])
+
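+;; xvpackod: interleave the odd-indexed elements of the two source vectors.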
+(define_insn "lasx_xvpackod_b"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(vec_select:V32QI
+	  (vec_concat:V64QI
+	    (match_operand:V32QI 1 "register_operand" "f")
+	    (match_operand:V32QI 2 "register_operand" "f"))
+	  (parallel [(const_int 1)  (const_int 33)
+		     (const_int 3)  (const_int 35)
+		     (const_int 5)  (const_int 37)
+		     (const_int 7)  (const_int 39)
+		     (const_int 9)  (const_int 41)
+		     (const_int 11)  (const_int 43)
+		     (const_int 13)  (const_int 45)
+		     (const_int 15)  (const_int 47)
+		     (const_int 17)  (const_int 49)
+		     (const_int 19)  (const_int 51)
+		     (const_int 21)  (const_int 53)
+		     (const_int 23)  (const_int 55)
+		     (const_int 25)  (const_int 57)
+		     (const_int 27)  (const_int 59)
+		     (const_int 29)  (const_int 61)
+		     (const_int 31)  (const_int 63)])))]
+  "ISA_HAS_LASX"
+  "xvpackod.b\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V32QI")])
+
+
+(define_insn "lasx_xvpackod_h"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(vec_select:V16HI
+	  (vec_concat:V32HI
+	    (match_operand:V16HI 1 "register_operand" "f")
+	    (match_operand:V16HI 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 17)
+		     (const_int 3) (const_int 19)
+		     (const_int 5) (const_int 21)
+		     (const_int 7) (const_int 23)
+		     (const_int 9) (const_int 25)
+		     (const_int 11) (const_int 27)
+		     (const_int 13) (const_int 29)
+		     (const_int 15) (const_int 31)])))]
+  "ISA_HAS_LASX"
+  "xvpackod.h\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V16HI")])
+
+
+(define_insn "lasx_xvpackod_w"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(vec_select:V8SI
+	  (vec_concat:V16SI
+	    (match_operand:V8SI 1 "register_operand" "f")
+	    (match_operand:V8SI 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 9)
+		     (const_int 3) (const_int 11)
+		     (const_int 5) (const_int 13)
+		     (const_int 7) (const_int 15)])))]
+  "ISA_HAS_LASX"
+  "xvpackod.w\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8SI")])
+
+
+(define_insn "lasx_xvpackod_w_f"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(vec_select:V8SF
+	  (vec_concat:V16SF
+	    (match_operand:V8SF 1 "register_operand" "f")
+	    (match_operand:V8SF 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 9)
+		     (const_int 3) (const_int 11)
+		     (const_int 5) (const_int 13)
+		     (const_int 7) (const_int 15)])))]
+  "ISA_HAS_LASX"
+  "xvpackod.w\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8SF")])
+
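+;; xvilvl: interleave the elements from the low half of each 128-bit lane
+;; of the two source vectors.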
+(define_insn "lasx_xvilvl_b"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(vec_select:V32QI
+	  (vec_concat:V64QI
+	    (match_operand:V32QI 1 "register_operand" "f")
+	    (match_operand:V32QI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 32)
+		     (const_int 1) (const_int 33)
+		     (const_int 2) (const_int 34)
+		     (const_int 3) (const_int 35)
+		     (const_int 4) (const_int 36)
+		     (const_int 5) (const_int 37)
+		     (const_int 6) (const_int 38)
+		     (const_int 7) (const_int 39)
+		     (const_int 16) (const_int 48)
+		     (const_int 17) (const_int 49)
+		     (const_int 18) (const_int 50)
+		     (const_int 19) (const_int 51)
+		     (const_int 20) (const_int 52)
+		     (const_int 21) (const_int 53)
+		     (const_int 22) (const_int 54)
+		     (const_int 23) (const_int 55)])))]
+  "ISA_HAS_LASX"
+  "xvilvl.b\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V32QI")])
+
+(define_insn "lasx_xvilvl_h"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(vec_select:V16HI
+	  (vec_concat:V32HI
+	    (match_operand:V16HI 1 "register_operand" "f")
+	    (match_operand:V16HI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 16)
+		     (const_int 1) (const_int 17)
+		     (const_int 2) (const_int 18)
+		     (const_int 3) (const_int 19)
+		     (const_int 8) (const_int 24)
+		     (const_int 9) (const_int 25)
+		     (const_int 10) (const_int 26)
+		     (const_int 11) (const_int 27)])))]
+  "ISA_HAS_LASX"
+  "xvilvl.h\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V16HI")])
+
+(define_insn "lasx_xvilvl_w"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(vec_select:V8SI
+	  (vec_concat:V16SI
+	    (match_operand:V8SI 1 "register_operand" "f")
+	    (match_operand:V8SI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 8)
+		     (const_int 1) (const_int 9)
+		     (const_int 4) (const_int 12)
+		     (const_int 5) (const_int 13)])))]
+  "ISA_HAS_LASX"
+  "xvilvl.w\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_xvilvl_w_f"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(vec_select:V8SF
+	  (vec_concat:V16SF
+	    (match_operand:V8SF 1 "register_operand" "f")
+	    (match_operand:V8SF 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 8)
+		     (const_int 1) (const_int 9)
+		     (const_int 4) (const_int 12)
+		     (const_int 5) (const_int 13)])))]
+  "ISA_HAS_LASX"
+  "xvilvl.w\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvilvl_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(vec_select:V4DI
+	  (vec_concat:V8DI
+	    (match_operand:V4DI 1 "register_operand" "f")
+	    (match_operand:V4DI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 4)
+		     (const_int 2) (const_int 6)])))]
+  "ISA_HAS_LASX"
+  "xvilvl.d\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_xvilvl_d_f"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(vec_select:V4DF
+	  (vec_concat:V8DF
+	    (match_operand:V4DF 1 "register_operand" "f")
+	    (match_operand:V4DF 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 4)
+		     (const_int 2) (const_int 6)])))]
+  "ISA_HAS_LASX"
+  "xvilvl.d\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V4DF")])
+
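+;; Integer vector min/max.  The immediate alternatives require all elements
+;; of operand 2 to be the same 5-bit (signed or unsigned) constant.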
+(define_insn "smax<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f")
+	(smax:ILASX (match_operand:ILASX 1 "register_operand" "f,f")
+		    (match_operand:ILASX 2 "reg_or_vector_same_simm5_operand" "f,Usv5")))]
+  "ISA_HAS_LASX"
+  "@
+   xvmax.<lasxfmt>\t%u0,%u1,%u2
+   xvmaxi.<lasxfmt>\t%u0,%u1,%E2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "umax<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f")
+	(umax:ILASX (match_operand:ILASX 1 "register_operand" "f,f")
+		    (match_operand:ILASX 2 "reg_or_vector_same_uimm5_operand" "f,Uuv5")))]
+  "ISA_HAS_LASX"
+  "@
+   xvmax.<lasxfmt_u>\t%u0,%u1,%u2
+   xvmaxi.<lasxfmt_u>\t%u0,%u1,%B2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "smin<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f")
+	(smin:ILASX (match_operand:ILASX 1 "register_operand" "f,f")
+		    (match_operand:ILASX 2 "reg_or_vector_same_simm5_operand" "f,Usv5")))]
+  "ISA_HAS_LASX"
+  "@
+   xvmin.<lasxfmt>\t%u0,%u1,%u2
+   xvmini.<lasxfmt>\t%u0,%u1,%E2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "umin<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f")
+	(umin:ILASX (match_operand:ILASX 1 "register_operand" "f,f")
+		    (match_operand:ILASX 2 "reg_or_vector_same_uimm5_operand" "f,Uuv5")))]
+  "ISA_HAS_LASX"
+  "@
+   xvmin.<lasxfmt_u>\t%u0,%u1,%u2
+   xvmini.<lasxfmt_u>\t%u0,%u1,%B2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvclo_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(clz:ILASX (not:ILASX (match_operand:ILASX 1 "register_operand" "f"))))]
+  "ISA_HAS_LASX"
+  "xvclo.<lasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "clz<mode>2"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(clz:ILASX (match_operand:ILASX 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvclz.<lasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvnor_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f,f")
+	(and:ILASX (not:ILASX (match_operand:ILASX 1 "register_operand" "f,f"))
+		   (not:ILASX (match_operand:ILASX 2 "reg_or_vector_same_val_operand" "f,Urv8"))))]
+  "ISA_HAS_LASX"
+  "@
+   xvnor.v\t%u0,%u1,%u2
+   xvnori.b\t%u0,%u1,%B2"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "<MODE>")])
+
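+;; xvpickev: within each 128-bit lane, concatenate the even-indexed elements
+;; of the two source vectors.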
+(define_insn "lasx_xvpickev_b"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(vec_select:V32QI
+	  (vec_concat:V64QI
+	    (match_operand:V32QI 1 "register_operand" "f")
+	    (match_operand:V32QI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 2)
+		     (const_int 4) (const_int 6)
+		     (const_int 8) (const_int 10)
+		     (const_int 12) (const_int 14)
+		     (const_int 32) (const_int 34)
+		     (const_int 36) (const_int 38)
+		     (const_int 40) (const_int 42)
+		     (const_int 44) (const_int 46)
+		     (const_int 16) (const_int 18)
+		     (const_int 20) (const_int 22)
+		     (const_int 24) (const_int 26)
+		     (const_int 28) (const_int 30)
+		     (const_int 48) (const_int 50)
+		     (const_int 52) (const_int 54)
+		     (const_int 56) (const_int 58)
+		     (const_int 60) (const_int 62)])))]
+  "ISA_HAS_LASX"
+  "xvpickev.b\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V32QI")])
+
+(define_insn "lasx_xvpickev_h"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(vec_select:V16HI
+	  (vec_concat:V32HI
+	    (match_operand:V16HI 1 "register_operand" "f")
+	    (match_operand:V16HI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 2)
+		     (const_int 4) (const_int 6)
+		     (const_int 16) (const_int 18)
+		     (const_int 20) (const_int 22)
+		     (const_int 8) (const_int 10)
+		     (const_int 12) (const_int 14)
+		     (const_int 24) (const_int 26)
+		     (const_int 28) (const_int 30)])))]
+  "ISA_HAS_LASX"
+  "xvpickev.h\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V16HI")])
+
+(define_insn "lasx_xvpickev_w"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(vec_select:V8SI
+	  (vec_concat:V16SI
+	    (match_operand:V8SI 1 "register_operand" "f")
+	    (match_operand:V8SI 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 2)
+		     (const_int 8) (const_int 10)
+		     (const_int 4) (const_int 6)
+		     (const_int 12) (const_int 14)])))]
+  "ISA_HAS_LASX"
+  "xvpickev.w\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_xvpickev_w_f"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(vec_select:V8SF
+	  (vec_concat:V16SF
+	    (match_operand:V8SF 1 "register_operand" "f")
+	    (match_operand:V8SF 2 "register_operand" "f"))
+	  (parallel [(const_int 0) (const_int 2)
+		     (const_int 8) (const_int 10)
+		     (const_int 4) (const_int 6)
+		     (const_int 12) (const_int 14)])))]
+  "ISA_HAS_LASX"
+  "xvpickev.w\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8SF")])
+
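+;; xvpickod: within each 128-bit lane, concatenate the odd-indexed elements
+;; of the two source vectors.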
+(define_insn "lasx_xvpickod_b"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(vec_select:V32QI
+	  (vec_concat:V64QI
+	    (match_operand:V32QI 1 "register_operand" "f")
+	    (match_operand:V32QI 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 3)
+		     (const_int 5) (const_int 7)
+		     (const_int 9) (const_int 11)
+		     (const_int 13) (const_int 15)
+		     (const_int 33) (const_int 35)
+		     (const_int 37) (const_int 39)
+		     (const_int 41) (const_int 43)
+		     (const_int 45) (const_int 47)
+		     (const_int 17) (const_int 19)
+		     (const_int 21) (const_int 23)
+		     (const_int 25) (const_int 27)
+		     (const_int 29) (const_int 31)
+		     (const_int 49) (const_int 51)
+		     (const_int 53) (const_int 55)
+		     (const_int 57) (const_int 59)
+		     (const_int 61) (const_int 63)])))]
+  "ISA_HAS_LASX"
+  "xvpickod.b\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V32QI")])
+
+(define_insn "lasx_xvpickod_h"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(vec_select:V16HI
+	  (vec_concat:V32HI
+	    (match_operand:V16HI 1 "register_operand" "f")
+	    (match_operand:V16HI 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 3)
+		     (const_int 5) (const_int 7)
+		     (const_int 17) (const_int 19)
+		     (const_int 21) (const_int 23)
+		     (const_int 9) (const_int 11)
+		     (const_int 13) (const_int 15)
+		     (const_int 25) (const_int 27)
+		     (const_int 29) (const_int 31)])))]
+  "ISA_HAS_LASX"
+  "xvpickod.h\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V16HI")])
+
+(define_insn "lasx_xvpickod_w"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(vec_select:V8SI
+	  (vec_concat:V16SI
+	    (match_operand:V8SI 1 "register_operand" "f")
+	    (match_operand:V8SI 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 3)
+		     (const_int 9) (const_int 11)
+		     (const_int 5) (const_int 7)
+		     (const_int 13) (const_int 15)])))]
+  "ISA_HAS_LASX"
+  "xvpickod.w\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_xvpickod_w_f"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(vec_select:V8SF
+	  (vec_concat:V16SF
+	    (match_operand:V8SF 1 "register_operand" "f")
+	    (match_operand:V8SF 2 "register_operand" "f"))
+	  (parallel [(const_int 1) (const_int 3)
+		     (const_int 9) (const_int 11)
+		     (const_int 5) (const_int 7)
+		     (const_int 13) (const_int 15)])))]
+  "ISA_HAS_LASX"
+  "xvpickod.w\t%u0,%u2,%u1"
+  [(set_attr "type" "simd_permute")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "popcount<mode>2"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(popcount:ILASX (match_operand:ILASX 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvpcnt.<lasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_pcnt")
+   (set_attr "mode" "<MODE>")])
+
+
+(define_insn "lasx_xvsat_s_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		      (match_operand 2 "const_<bitimm256>_operand" "")]
+		     UNSPEC_LASX_XVSAT_S))]
+  "ISA_HAS_LASX"
+  "xvsat.<lasxfmt>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_sat")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsat_u_<lasxfmt_u>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand 2 "const_<bitimm256>_operand" "")]
+		      UNSPEC_LASX_XVSAT_U))]
+  "ISA_HAS_LASX"
+  "xvsat.<lasxfmt_u>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_sat")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvshuf4i_<lasxfmt_f>"
+  [(set (match_operand:LASX_WHB_W 0 "register_operand" "=f")
+	(unspec:LASX_WHB_W [(match_operand:LASX_WHB_W 1 "register_operand" "f")
+			    (match_operand 2 "const_uimm8_operand")]
+			   UNSPEC_LASX_XVSHUF4I))]
+  "ISA_HAS_LASX"
+  "xvshuf4i.<lasxfmt>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_shf")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvshuf4i_<lasxfmt_f>_1"
+  [(set (match_operand:LASX_W 0 "register_operand" "=f")
+    (vec_select:LASX_W
+      (match_operand:LASX_W 1 "nonimmediate_operand" "f")
+      (parallel [(match_operand 2 "const_0_to_3_operand")
+             (match_operand 3 "const_0_to_3_operand")
+             (match_operand 4 "const_0_to_3_operand")
+             (match_operand 5 "const_0_to_3_operand")
+             (match_operand 6 "const_4_to_7_operand")
+             (match_operand 7 "const_4_to_7_operand")
+             (match_operand 8 "const_4_to_7_operand")
+             (match_operand 9 "const_4_to_7_operand")])))]
+  "ISA_HAS_LASX
+   && INTVAL (operands[2]) + 4 == INTVAL (operands[6])
+   && INTVAL (operands[3]) + 4 == INTVAL (operands[7])
+   && INTVAL (operands[4]) + 4 == INTVAL (operands[8])
+   && INTVAL (operands[5]) + 4 == INTVAL (operands[9])"
+{
+  int mask = 0;
+  mask |= INTVAL (operands[2]) << 0;
+  mask |= INTVAL (operands[3]) << 2;
+  mask |= INTVAL (operands[4]) << 4;
+  mask |= INTVAL (operands[5]) << 6;
+  operands[2] = GEN_INT (mask);
+
+  return "xvshuf4i.w\t%u0,%u1,%2";
+}
+  [(set_attr "type" "simd_shf")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsrar_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVSRAR))]
+  "ISA_HAS_LASX"
+  "xvsrar.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsrari_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand 2 "const_<bitimm256>_operand" "")]
+		      UNSPEC_LASX_XVSRARI))]
+  "ISA_HAS_LASX"
+  "xvsrari.<lasxfmt>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsrlr_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVSRLR))]
+  "ISA_HAS_LASX"
+  "xvsrlr.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsrlri_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand 2 "const_<bitimm256>_operand" "")]
+		      UNSPEC_LASX_XVSRLRI))]
+  "ISA_HAS_LASX"
+  "xvsrlri.<lasxfmt>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssub_s_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(ss_minus:ILASX (match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvssub.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssub_u_<lasxfmt_u>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(us_minus:ILASX (match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvssub.<lasxfmt_u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvshuf_<lasxfmt_f>"
+  [(set (match_operand:LASX_DWH 0 "register_operand" "=f")
+	(unspec:LASX_DWH [(match_operand:LASX_DWH 1 "register_operand" "0")
+			  (match_operand:LASX_DWH 2 "register_operand" "f")
+			  (match_operand:LASX_DWH 3 "register_operand" "f")]
+			UNSPEC_LASX_XVSHUF))]
+  "ISA_HAS_LASX"
+  "xvshuf.<lasxfmt>\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_sld")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvshuf_b"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(unspec:V32QI [(match_operand:V32QI 1 "register_operand" "f")
+		       (match_operand:V32QI 2 "register_operand" "f")
+		       (match_operand:V32QI 3 "register_operand" "f")]
+		      UNSPEC_LASX_XVSHUF_B))]
+  "ISA_HAS_LASX"
+  "xvshuf.b\t%u0,%u1,%u2,%u3"
+  [(set_attr "type" "simd_sld")
+   (set_attr "mode" "V32QI")])
+
+(define_insn "lasx_xvreplve0_<lasxfmt_f>"
+  [(set (match_operand:LASX 0 "register_operand" "=f")
+	(vec_duplicate:LASX
+	  (vec_select:<UNITMODE>
+	    (match_operand:LASX 1 "register_operand" "f")
+	    (parallel [(const_int 0)]))))]
+  "ISA_HAS_LASX"
+  "xvreplve0.<lasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "<MODE>")])
+
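+;; xvrepl128vei: broadcast one element of each 128-bit lane across that lane.
+;; The _internal patterns require the two indices to name the same element
+;; position in the low and high lane respectively.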
+(define_insn "lasx_xvrepl128vei_b_internal"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(vec_duplicate:V32QI
+	  (vec_select:V32QI
+	    (match_operand:V32QI 1 "register_operand" "f")
+	    (parallel [(match_operand 2 "const_uimm4_operand" "")
+		       (match_dup 2) (match_dup 2) (match_dup 2)
+		       (match_dup 2) (match_dup 2) (match_dup 2)
+		       (match_dup 2) (match_dup 2) (match_dup 2)
+		       (match_dup 2) (match_dup 2) (match_dup 2)
+		       (match_dup 2) (match_dup 2) (match_dup 2)
+		       (match_operand 3 "const_16_to_31_operand" "")
+		       (match_dup 3) (match_dup 3) (match_dup 3)
+		       (match_dup 3) (match_dup 3) (match_dup 3)
+		       (match_dup 3) (match_dup 3) (match_dup 3)
+		       (match_dup 3) (match_dup 3) (match_dup 3)
+		       (match_dup 3) (match_dup 3) (match_dup 3)]))))]
+  "ISA_HAS_LASX && ((INTVAL (operands[3]) - INTVAL (operands[2])) == 16)"
+  "xvrepl128vei.b\t%u0,%u1,%2"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "V32QI")])
+
+(define_insn "lasx_xvrepl128vei_h_internal"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(vec_duplicate:V16HI
+	  (vec_select:V16HI
+	    (match_operand:V16HI 1 "register_operand" "f")
+	    (parallel [(match_operand 2 "const_uimm3_operand" "")
+		       (match_dup 2) (match_dup 2) (match_dup 2)
+		       (match_dup 2) (match_dup 2) (match_dup 2)
+		       (match_dup 2)
+		       (match_operand 3 "const_8_to_15_operand" "")
+		       (match_dup 3) (match_dup 3) (match_dup 3)
+		       (match_dup 3) (match_dup 3) (match_dup 3)
+		       (match_dup 3)]))))]
+  "ISA_HAS_LASX && ((INTVAL (operands[3]) - INTVAL (operands[2])) == 8)"
+  "xvrepl128vei.h\t%u0,%u1,%2"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "V16HI")])
+
+(define_insn "lasx_xvrepl128vei_w_internal"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(vec_duplicate:V8SI
+	  (vec_select:V8SI
+	    (match_operand:V8SI 1 "register_operand" "f")
+	    (parallel [(match_operand 2 "const_0_to_3_operand" "")
+		       (match_dup 2) (match_dup 2) (match_dup 2)
+		       (match_operand 3 "const_4_to_7_operand" "")
+		       (match_dup 3) (match_dup 3) (match_dup 3)]))))]
+  "ISA_HAS_LASX && ((INTVAL (operands[3]) - INTVAL (operands[2])) == 4)"
+  "xvrepl128vei.w\t%u0,%u1,%2"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_xvrepl128vei_d_internal"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(vec_duplicate:V4DI
+	  (vec_select:V4DI
+	    (match_operand:V4DI 1 "register_operand" "f")
+	    (parallel [(match_operand 2 "const_0_or_1_operand" "")
+		       (match_dup 2)
+		       (match_operand 3 "const_2_or_3_operand" "")
+		       (match_dup 3)]))))]
+  "ISA_HAS_LASX && ((INTVAL (operands[3]) - INTVAL (operands[2])) == 2)"
+  "xvrepl128vei.d\t%u0,%u1,%2"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_xvrepl128vei_<lasxfmt_f>"
+  [(set (match_operand:LASX 0 "register_operand" "=f")
+	(unspec:LASX [(match_operand:LASX 1 "register_operand" "f")
+		      (match_operand 2 "const_<indeximm_lo>_operand" "")]
+		     UNSPEC_LASX_XVREPL128VEI))]
+  "ISA_HAS_LASX"
+  "xvrepl128vei.<lasxfmt>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvreplve0_<lasxfmt_f>_scalar"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+    (vec_duplicate:FLASX
+      (match_operand:<UNITMODE> 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvreplve0.<lasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvreplve0_q"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(unspec:V32QI [(match_operand:V32QI 1 "register_operand" "f")]
+		      UNSPEC_LASX_XVREPLVE0_Q))]
+  "ISA_HAS_LASX"
+  "xvreplve0.q\t%u0,%u1"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "V32QI")])
+
+(define_insn "lasx_xvfcvt_h_s"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(unspec:V16HI [(match_operand:V8SF 1 "register_operand" "f")
+		       (match_operand:V8SF 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVFCVT))]
+  "ISA_HAS_LASX"
+  "xvfcvt.h.s\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V16HI")])
+
+(define_insn "lasx_xvfcvt_s_d"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(unspec:V8SF [(match_operand:V4DF 1 "register_operand" "f")
+		      (match_operand:V4DF 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVFCVT))]
+  "ISA_HAS_LASX"
+  "xvfcvt.s.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "vec_pack_trunc_v4df"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(vec_concat:V8SF
+	  (float_truncate:V4SF (match_operand:V4DF 1 "register_operand" "f"))
+	  (float_truncate:V4SF (match_operand:V4DF 2 "register_operand" "f"))))]
+  "ISA_HAS_LASX"
+  "xvfcvt.s.d\t%u0,%u2,%u1\n\txvpermi.d\t%u0,%u0,0xd8"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V8SF")
+   (set_attr "length" "8")])
+
+;; Define for builtin function.
+(define_insn "lasx_xvfcvth_s_h"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(unspec:V8SF [(match_operand:V16HI 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFCVTH))]
+  "ISA_HAS_LASX"
+  "xvfcvth.s.h\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V8SF")])
+
+;; Define for builtin function.
+(define_insn "lasx_xvfcvth_d_s"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(float_extend:V4DF
+	 (vec_select:V4SF
+	  (match_operand:V8SF 1 "register_operand" "f")
+	  (parallel [(const_int 2) (const_int 3)
+		      (const_int 6) (const_int 7)]))))]
+  "ISA_HAS_LASX"
+  "xvfcvth.d.s\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4DF")
+   (set_attr "length" "12")])
+
+;; Define for gen insn.
+(define_insn "lasx_xvfcvth_d_insn"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(float_extend:V4DF
+	(vec_select:V4SF
+	  (match_operand:V8SF 1 "register_operand" "f")
+	  (parallel [(const_int 4) (const_int 5)
+		     (const_int 6) (const_int 7)]))))]
+  "ISA_HAS_LASX"
+  "xvpermi.d\t%u0,%u1,0xfa\n\txvfcvtl.d.s\t%u0,%u0"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4DF")
+   (set_attr "length" "12")])
+
+;; Define for builtin function.
+(define_insn "lasx_xvfcvtl_s_h"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(unspec:V8SF [(match_operand:V16HI 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFCVTL))]
+  "ISA_HAS_LASX"
+  "xvfcvtl.s.h\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V8SF")])
+
+;; Define for builtin function.
+(define_insn "lasx_xvfcvtl_d_s"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(float_extend:V4DF
+	(vec_select:V4SF
+	  (match_operand:V8SF 1 "register_operand" "f")
+	  (parallel [(const_int 0) (const_int 1)
+		     (const_int 4) (const_int 5)]))))]
+  "ISA_HAS_LASX"
+  "xvfcvtl.d.s\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4DF")
+   (set_attr "length" "8")])
+
+;; Define for gen insn.
+(define_insn "lasx_xvfcvtl_d_insn"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(float_extend:V4DF
+	(vec_select:V4SF
+	  (match_operand:V8SF 1 "register_operand" "f")
+	  (parallel [(const_int 0) (const_int 1)
+		     (const_int 2) (const_int 3)]))))]
+  "ISA_HAS_LASX"
+  "xvpermi.d\t%u0,%u1,0x50\n\txvfcvtl.d.s\t%u0,%u0"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4DF")
+   (set_attr "length" "8")])
+
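+;; Branches on whole-vector tests: xvset{anyeqz,allnez} (per-element) or
+;; xvset{eqz,nez} (whole register) set an FCC register, which is then
+;; tested with bcnez.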
+(define_code_attr lasxbr
+  [(eq "xbz")
+   (ne "xbnz")])
+
+(define_code_attr lasxeq_v
+  [(eq "eqz")
+   (ne "nez")])
+
+(define_code_attr lasxne_v
+  [(eq "nez")
+   (ne "eqz")])
+
+(define_code_attr lasxeq
+  [(eq "anyeqz")
+   (ne "allnez")])
+
+(define_code_attr lasxne
+  [(eq "allnez")
+   (ne "anyeqz")])
+
+(define_insn "lasx_<lasxbr>_<lasxfmt_f>"
+  [(set (pc)
+	(if_then_else
+	  (equality_op
+	    (unspec:SI [(match_operand:LASX 1 "register_operand" "f")]
+		       UNSPEC_LASX_BRANCH)
+	    (match_operand:SI 2 "const_0_operand"))
+	  (label_ref (match_operand 0))
+	  (pc)))
+   (clobber (match_scratch:FCC 3 "=z"))]
+  "ISA_HAS_LASX"
+{
+  return loongarch_output_conditional_branch (insn, operands,
+					 "xvset<lasxeq>.<lasxfmt>\t%Z3%u1\n\tbcnez\t%Z3%0",
+					 "xvset<lasxne>.<lasxfmt>\t%Z3%u1\n\tbcnez\t%Z3%0");
+}
+  [(set_attr "type" "simd_branch")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_<lasxbr>_v_<lasxfmt_f>"
+  [(set (pc)
+	(if_then_else
+	  (equality_op
+	    (unspec:SI [(match_operand:LASX 1 "register_operand" "f")]
+		       UNSPEC_LASX_BRANCH_V)
+	    (match_operand:SI 2 "const_0_operand"))
+	  (label_ref (match_operand 0))
+	  (pc)))
+   (clobber (match_scratch:FCC 3 "=z"))]
+  "ISA_HAS_LASX"
+{
+  return loongarch_output_conditional_branch (insn, operands,
+					 "xvset<lasxeq_v>.v\t%Z3%u1\n\tbcnez\t%Z3%0",
+					 "xvset<lasxne_v>.v\t%Z3%u1\n\tbcnez\t%Z3%0");
+}
+  [(set_attr "type" "simd_branch")
+   (set_attr "mode" "<MODE>")])
+
+;; Loongson ASX vext2xv widening extension patterns.
+(define_insn "lasx_vext2xv_h<u>_b<u>"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(any_extend:V16HI
+	  (vec_select:V16QI
+	    (match_operand:V32QI 1 "register_operand" "f")
+	    (parallel [(const_int 0) (const_int 1)
+		       (const_int 2) (const_int 3)
+		       (const_int 4) (const_int 5)
+		       (const_int 6) (const_int 7)
+		       (const_int 8) (const_int 9)
+		       (const_int 10) (const_int 11)
+		       (const_int 12) (const_int 13)
+		       (const_int 14) (const_int 15)]))))]
+  "ISA_HAS_LASX"
+  "vext2xv.h<u>.b<u>\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V16HI")])
+
+(define_insn "lasx_vext2xv_w<u>_h<u>"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(any_extend:V8SI
+	  (vec_select:V8HI
+	    (match_operand:V16HI 1 "register_operand" "f")
+	    (parallel [(const_int 0) (const_int 1)
+		       (const_int 2) (const_int 3)
+		       (const_int 4) (const_int 5)
+		       (const_int 6) (const_int 7)]))))]
+  "ISA_HAS_LASX"
+  "vext2xv.w<u>.h<u>\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_vext2xv_d<u>_w<u>"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(any_extend:V4DI
+	  (vec_select:V4SI
+	    (match_operand:V8SI 1 "register_operand" "f")
+	    (parallel [(const_int 0) (const_int 1)
+		       (const_int 2) (const_int 3)]))))]
+  "ISA_HAS_LASX"
+  "vext2xv.d<u>.w<u>\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_vext2xv_w<u>_b<u>"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(any_extend:V8SI
+	  (vec_select:V8QI
+	   (match_operand:V32QI 1 "register_operand" "f")
+	    (parallel [(const_int 0) (const_int 1)
+		       (const_int 2) (const_int 3)
+		       (const_int 4) (const_int 5)
+		       (const_int 6) (const_int 7)]))))]
+  "ISA_HAS_LASX"
+  "vext2xv.w<u>.b<u>\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_vext2xv_d<u>_h<u>"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(any_extend:V4DI
+	  (vec_select:V4HI
+	    (match_operand:V16HI 1 "register_operand" "f")
+	    (parallel [(const_int 0) (const_int 1)
+		       (const_int 2) (const_int 3)]))))]
+  "ISA_HAS_LASX"
+  "vext2xv.d<u>.h<u>\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_vext2xv_d<u>_b<u>"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(any_extend:V4DI
+	  (vec_select:V4QI
+	    (match_operand:V32QI 1 "register_operand" "f")
+	    (parallel [(const_int 0) (const_int 1)
+		       (const_int 2) (const_int 3)]))))]
+  "ISA_HAS_LASX"
+  "vext2xv.d<u>.b<u>\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4DI")])
+
+;; Patterns extended from Loongson SX to Loongson ASX.
+(define_insn "xvandn<mode>3"
+  [(set (match_operand:LASX 0 "register_operand" "=f")
+	(and:LASX (not:LASX (match_operand:LASX 1 "register_operand" "f"))
+			    (match_operand:LASX 2 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvandn.v\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "abs<mode>2"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(abs:ILASX (match_operand:ILASX 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvsigncov.<lasxfmt>\t%u0,%u1,%u1"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "neg<mode>2"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(neg:ILASX (match_operand:ILASX 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvneg.<lasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvmuh_s_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVMUH_S))]
+  "ISA_HAS_LASX"
+  "xvmuh.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvmuh_u_<lasxfmt_u>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVMUH_U))]
+  "ISA_HAS_LASX"
+  "xvmuh.<lasxfmt_u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsllwil_s_<dlasxfmt>_<lasxfmt>"
+  [(set (match_operand:<VDMODE256> 0 "register_operand" "=f")
+	(unspec:<VDMODE256> [(match_operand:ILASX_WHB 1 "register_operand" "f")
+			     (match_operand 2 "const_<bitimm256>_operand" "")]
+			    UNSPEC_LASX_XVSLLWIL_S))]
+  "ISA_HAS_LASX"
+  "xvsllwil.<dlasxfmt>.<lasxfmt>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsllwil_u_<dlasxfmt_u>_<lasxfmt_u>"
+  [(set (match_operand:<VDMODE256> 0 "register_operand" "=f")
+	(unspec:<VDMODE256> [(match_operand:ILASX_WHB 1 "register_operand" "f")
+			     (match_operand 2 "const_<bitimm256>_operand" "")]
+			    UNSPEC_LASX_XVSLLWIL_U))]
+  "ISA_HAS_LASX"
+  "xvsllwil.<dlasxfmt_u>.<lasxfmt_u>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsran_<hlasxfmt>_<lasxfmt>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(unspec:<VHSMODE256> [(match_operand:ILASX_DWH 1 "register_operand" "f")
+			      (match_operand:ILASX_DWH 2 "register_operand" "f")]
+			     UNSPEC_LASX_XVSRAN))]
+  "ISA_HAS_LASX"
+  "xvsran.<hlasxfmt>.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssran_s_<hlasxfmt>_<lasxfmt>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(unspec:<VHSMODE256> [(match_operand:ILASX_DWH 1 "register_operand" "f")
+			      (match_operand:ILASX_DWH 2 "register_operand" "f")]
+			     UNSPEC_LASX_XVSSRAN_S))]
+  "ISA_HAS_LASX"
+  "xvssran.<hlasxfmt>.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssran_u_<hlasxfmt_u>_<lasxfmt>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(unspec:<VHSMODE256> [(match_operand:ILASX_DWH 1 "register_operand" "f")
+			      (match_operand:ILASX_DWH 2 "register_operand" "f")]
+			     UNSPEC_LASX_XVSSRAN_U))]
+  "ISA_HAS_LASX"
+  "xvssran.<hlasxfmt_u>.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsrarn_<hlasxfmt>_<lasxfmt>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(unspec:<VHSMODE256> [(match_operand:ILASX_DWH 1 "register_operand" "f")
+			      (match_operand:ILASX_DWH 2 "register_operand" "f")]
+			     UNSPEC_LASX_XVSRARN))]
+  "ISA_HAS_LASX"
+  "xvsrarn.<hlasxfmt>.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrarn_s_<hlasxfmt>_<lasxfmt>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(unspec:<VHSMODE256> [(match_operand:ILASX_DWH 1 "register_operand" "f")
+			      (match_operand:ILASX_DWH 2 "register_operand" "f")]
+			     UNSPEC_LASX_XVSSRARN_S))]
+  "ISA_HAS_LASX"
+  "xvssrarn.<hlasxfmt>.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrarn_u_<hlasxfmt_u>_<lasxfmt>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(unspec:<VHSMODE256> [(match_operand:ILASX_DWH 1 "register_operand" "f")
+			      (match_operand:ILASX_DWH 2 "register_operand" "f")]
+			     UNSPEC_LASX_XVSSRARN_U))]
+  "ISA_HAS_LASX"
+  "xvssrarn.<hlasxfmt_u>.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsrln_<hlasxfmt>_<lasxfmt>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(unspec:<VHSMODE256> [(match_operand:ILASX_DWH 1 "register_operand" "f")
+			      (match_operand:ILASX_DWH 2 "register_operand" "f")]
+			     UNSPEC_LASX_XVSRLN))]
+  "ISA_HAS_LASX"
+  "xvsrln.<hlasxfmt>.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrln_u_<hlasxfmt_u>_<lasxfmt>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(unspec:<VHSMODE256> [(match_operand:ILASX_DWH 1 "register_operand" "f")
+			      (match_operand:ILASX_DWH 2 "register_operand" "f")]
+			     UNSPEC_LASX_XVSSRLN_U))]
+  "ISA_HAS_LASX"
+  "xvssrln.<hlasxfmt_u>.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsrlrn_<hlasxfmt>_<lasxfmt>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(unspec:<VHSMODE256> [(match_operand:ILASX_DWH 1 "register_operand" "f")
+			      (match_operand:ILASX_DWH 2 "register_operand" "f")]
+			     UNSPEC_LASX_XVSRLRN))]
+  "ISA_HAS_LASX"
+  "xvsrlrn.<hlasxfmt>.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrlrn_u_<hlasxfmt_u>_<lasxfmt>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(unspec:<VHSMODE256> [(match_operand:ILASX_DWH 1 "register_operand" "f")
+			      (match_operand:ILASX_DWH 2 "register_operand" "f")]
+			     UNSPEC_LASX_XVSSRLRN_U))]
+  "ISA_HAS_LASX"
+  "xvssrlrn.<hlasxfmt_u>.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvfrstpi_<lasxfmt>"
+  [(set (match_operand:ILASX_HB 0 "register_operand" "=f")
+	(unspec:ILASX_HB [(match_operand:ILASX_HB 1 "register_operand" "0")
+			  (match_operand:ILASX_HB 2 "register_operand" "f")
+			  (match_operand 3 "const_uimm5_operand" "")]
+			 UNSPEC_LASX_XVFRSTPI))]
+  "ISA_HAS_LASX"
+  "xvfrstpi.<lasxfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvfrstp_<lasxfmt>"
+  [(set (match_operand:ILASX_HB 0 "register_operand" "=f")
+	(unspec:ILASX_HB [(match_operand:ILASX_HB 1 "register_operand" "0")
+			  (match_operand:ILASX_HB 2 "register_operand" "f")
+			  (match_operand:ILASX_HB 3 "register_operand" "f")]
+			 UNSPEC_LASX_XVFRSTP))]
+  "ISA_HAS_LASX"
+  "xvfrstp.<lasxfmt>\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvshuf4i_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "0")
+		      (match_operand:V4DI 2 "register_operand" "f")
+		      (match_operand 3 "const_uimm8_operand")]
+		     UNSPEC_LASX_XVSHUF4I))]
+  "ISA_HAS_LASX"
+  "xvshuf4i.d\t%u0,%u2,%3"
+  [(set_attr "type" "simd_sld")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_xvbsrl_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand 2 "const_uimm5_operand" "")]
+		      UNSPEC_LASX_XVBSRL_V))]
+  "ISA_HAS_LASX"
+  "xvbsrl.v\t%u0,%u1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvbsll_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand 2 "const_uimm5_operand" "")]
+		      UNSPEC_LASX_XVBSLL_V))]
+  "ISA_HAS_LASX"
+  "xvbsll.v\t%u0,%u1,%2"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvextrins_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVEXTRINS))]
+  "ISA_HAS_LASX"
+  "xvextrins.<lasxfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvmskltz_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")]
+		      UNSPEC_LASX_XVMSKLTZ))]
+  "ISA_HAS_LASX"
+  "xvmskltz.<lasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsigncov_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVSIGNCOV))]
+  "ISA_HAS_LASX"
+  "xvsigncov.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
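+;; copysign: clear the sign bit of operand 1, extract the sign bit of
+;; operand 2, and merge the two results with a bitwise OR.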
+(define_expand "copysign<mode>3"
+  [(set (match_dup 4)
+	(and:FLASX
+	  (not:FLASX (match_dup 3))
+	  (match_operand:FLASX 1 "register_operand")))
+   (set (match_dup 5)
+	(and:FLASX (match_dup 3)
+		   (match_operand:FLASX 2 "register_operand")))
+   (set (match_operand:FLASX 0 "register_operand")
+	(ior:FLASX (match_dup 4) (match_dup 5)))]
+  "ISA_HAS_LASX"
+{
+  operands[3] = loongarch_build_signbit_mask (<MODE>mode, 1, 0);
+
+  operands[4] = gen_reg_rtx (<MODE>mode);
+  operands[5] = gen_reg_rtx (<MODE>mode);
+})
+
+
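+;; Floating-point abs and neg via sign-bit manipulation: xvbitclri clears
+;; the sign bit, xvbitrevi flips it.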
+(define_insn "absv4df2"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(abs:V4DF (match_operand:V4DF 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvbitclri.d\t%u0,%u1,63"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "absv8sf2"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(abs:V8SF (match_operand:V8SF 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvbitclri.w\t%u0,%u1,31"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "negv4df2"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(neg:V4DF (match_operand:V4DF 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvbitrevi.d\t%u0,%u1,63"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "negv8sf2"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(neg:V8SF (match_operand:V8SF 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvbitrevi.w\t%u0,%u1,31"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "V8SF")])
+
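+;; Fused multiply-add family: xvfmadd, xvfmsub, xvfnmadd and xvfnmsub.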
+(define_insn "xvfmadd<mode>4"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(fma:FLASX (match_operand:FLASX 1 "register_operand" "f")
+		   (match_operand:FLASX 2 "register_operand" "f")
+		   (match_operand:FLASX 3 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvfmadd.<flasxfmt>\t%u0,%u1,$u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "fms<mode>4"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(fma:FLASX (match_operand:FLASX 1 "register_operand" "f")
+		   (match_operand:FLASX 2 "register_operand" "f")
+		   (neg:FLASX (match_operand:FLASX 3 "register_operand" "f"))))]
+  "ISA_HAS_LASX"
+  "xvfmsub.<flasxfmt>\t%u0,%u1,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "xvfnmsub<mode>4_nmsub4"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(neg:FLASX
+	  (fma:FLASX
+	    (match_operand:FLASX 1 "register_operand" "f")
+	    (match_operand:FLASX 2 "register_operand" "f")
+	    (neg:FLASX (match_operand:FLASX 3 "register_operand" "f")))))]
+  "ISA_HAS_LASX"
+  "xvfnmsub.<flasxfmt>\t%u0,%u1,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "<MODE>")])
+
+
+(define_insn "xvfnmadd<mode>4_nmadd4"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(neg:FLASX
+	  (fma:FLASX
+	    (match_operand:FLASX 1 "register_operand" "f")
+	    (match_operand:FLASX 2 "register_operand" "f")
+	    (match_operand:FLASX 3 "register_operand" "f"))))]
+  "ISA_HAS_LASX"
+  "xvfnmadd.<flasxfmt>\t%u0,%u1,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "<MODE>")])
+
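+;; Floating-point/integer conversions with explicit rounding modes
+;; (rne, rz, rp, rm), including the narrowing w.d/s.l forms and the
+;; high/low-half widening forms.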
+(define_insn "lasx_xvftintrne_w_s"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(unspec:V8SI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRNE_W_S))]
+  "ISA_HAS_LASX"
+  "xvftintrne.w.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvftintrne_l_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRNE_L_D))]
+  "ISA_HAS_LASX"
+  "xvftintrne.l.d\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "lasx_xvftintrp_w_s"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(unspec:V8SI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRP_W_S))]
+  "ISA_HAS_LASX"
+  "xvftintrp.w.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvftintrp_l_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRP_L_D))]
+  "ISA_HAS_LASX"
+  "xvftintrp.l.d\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "lasx_xvftintrm_w_s"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(unspec:V8SI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRM_W_S))]
+  "ISA_HAS_LASX"
+  "xvftintrm.w.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvftintrm_l_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRM_L_D))]
+  "ISA_HAS_LASX"
+  "xvftintrm.l.d\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "lasx_xvftint_w_d"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(unspec:V8SI [(match_operand:V4DF 1 "register_operand" "f")
+		      (match_operand:V4DF 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINT_W_D))]
+  "ISA_HAS_LASX"
+  "xvftint.w.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "lasx_xvffint_s_l"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(unspec:V8SF [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVFFINT_S_L))]
+  "ISA_HAS_LASX"
+  "xvffint.s.l\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_xvftintrz_w_d"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(unspec:V8SI [(match_operand:V4DF 1 "register_operand" "f")
+		      (match_operand:V4DF 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRZ_W_D))]
+  "ISA_HAS_LASX"
+  "xvftintrz.w.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "lasx_xvftintrp_w_d"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(unspec:V8SI [(match_operand:V4DF 1 "register_operand" "f")
+		      (match_operand:V4DF 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRP_W_D))]
+  "ISA_HAS_LASX"
+  "xvftintrp.w.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "lasx_xvftintrm_w_d"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(unspec:V8SI [(match_operand:V4DF 1 "register_operand" "f")
+		      (match_operand:V4DF 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRM_W_D))]
+  "ISA_HAS_LASX"
+  "xvftintrm.w.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "lasx_xvftintrne_w_d"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(unspec:V8SI [(match_operand:V4DF 1 "register_operand" "f")
+		      (match_operand:V4DF 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRNE_W_D))]
+  "ISA_HAS_LASX"
+  "xvftintrne.w.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "lasx_xvftinth_l_s"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTH_L_S))]
+  "ISA_HAS_LASX"
+  "xvftinth.l.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvftintl_l_s"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTL_L_S))]
+  "ISA_HAS_LASX"
+  "xvftintl.l.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvffinth_d_w"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(unspec:V4DF [(match_operand:V8SI 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFFINTH_D_W))]
+  "ISA_HAS_LASX"
+  "xvffinth.d.w\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_xvffintl_d_w"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(unspec:V4DF [(match_operand:V8SI 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFFINTL_D_W))]
+  "ISA_HAS_LASX"
+  "xvffintl.d.w\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_xvftintrzh_l_s"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRZH_L_S))]
+  "ISA_HAS_LASX"
+  "xvftintrzh.l.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvftintrzl_l_s"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRZL_L_S))]
+  "ISA_HAS_LASX"
+  "xvftintrzl.l.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lasx_xvftintrph_l_s"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRPH_L_S))]
+  "ISA_HAS_LASX"
+  "xvftintrph.l.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4SF")])
+
+(define_insn "lasx_xvftintrpl_l_s"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRPL_L_S))]
+  "ISA_HAS_LASX"
+  "xvftintrpl.l.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvftintrmh_l_s"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRMH_L_S))]
+  "ISA_HAS_LASX"
+  "xvftintrmh.l.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvftintrml_l_s"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRML_L_S))]
+  "ISA_HAS_LASX"
+  "xvftintrml.l.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvftintrneh_l_s"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRNEH_L_S))]
+  "ISA_HAS_LASX"
+  "xvftintrneh.l.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvftintrnel_l_s"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFTINTRNEL_L_S))]
+  "ISA_HAS_LASX"
+  "xvftintrnel.l.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvfrintrne_s"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(unspec:V8SF [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFRINTRNE_S))]
+  "ISA_HAS_LASX"
+  "xvfrintrne.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvfrintrne_d"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(unspec:V4DF [(match_operand:V4DF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFRINTRNE_D))]
+  "ISA_HAS_LASX"
+  "xvfrintrne.d\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "lasx_xvfrintrz_s"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(unspec:V8SF [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFRINTRZ_S))]
+  "ISA_HAS_LASX"
+  "xvfrintrz.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvfrintrz_d"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(unspec:V4DF [(match_operand:V4DF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFRINTRZ_D))]
+  "ISA_HAS_LASX"
+  "xvfrintrz.d\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "lasx_xvfrintrp_s"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(unspec:V8SF [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFRINTRP_S))]
+  "ISA_HAS_LASX"
+  "xvfrintrp.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvfrintrp_d"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(unspec:V4DF [(match_operand:V4DF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFRINTRP_D))]
+  "ISA_HAS_LASX"
+  "xvfrintrp.d\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4DF")])
+
+(define_insn "lasx_xvfrintrm_s"
+  [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(unspec:V8SF [(match_operand:V8SF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFRINTRM_S))]
+  "ISA_HAS_LASX"
+  "xvfrintrm.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "lasx_xvfrintrm_d"
+  [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(unspec:V4DF [(match_operand:V4DF 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVFRINTRM_D))]
+  "ISA_HAS_LASX"
+  "xvfrintrm.d\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4DF")])
+
+;; Vector versions of the floating-point frint patterns.
+;; Expands to btrunc, ceil, floor, rint.
+(define_insn "<FRINT256_S:frint256_pattern_s>v8sf2"
+ [(set (match_operand:V8SF 0 "register_operand" "=f")
+	(unspec:V8SF [(match_operand:V8SF 1 "register_operand" "f")]
+			 FRINT256_S))]
+  "ISA_HAS_LASX"
+  "xvfrint<FRINT256_S:frint256_suffix>.s\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V8SF")])
+
+(define_insn "<FRINT256_D:frint256_pattern_d>v4df2"
+ [(set (match_operand:V4DF 0 "register_operand" "=f")
+	(unspec:V4DF [(match_operand:V4DF 1 "register_operand" "f")]
+			 FRINT256_D))]
+  "ISA_HAS_LASX"
+  "xvfrint<FRINT256_D:frint256_suffix>.d\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "V4DF")])
+
+;; Expands to round.
+(define_insn "round<mode>2"
+ [(set (match_operand:FLASX 0 "register_operand" "=f")
+	(unspec:FLASX [(match_operand:FLASX 1 "register_operand" "f")]
+			 UNSPEC_LASX_XVFRINT))]
+  "ISA_HAS_LASX"
+  "xvfrint.<flasxfmt>\t%u0,%u1"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+;; Offset load and broadcast
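+;; A usage sketch for these patterns, assuming the __lasx_xvldrepl_*
+;; intrinsic names and the __m256i type from the lasxintrin.h added in this
+;; series:
+;;   __m256i v = __lasx_xvldrepl_w (buf, 4);  /* load the word at buf + 4 and
+;;                                               broadcast it to all 8 lanes */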
+(define_expand "lasx_xvldrepl_<lasxfmt_f>"
+  [(match_operand:LASX 0 "register_operand")
+   (match_operand 2 "aq12<lasxfmt>_operand")
+   (match_operand 1 "pmode_register_operand")]
+  "ISA_HAS_LASX"
+{
+  emit_insn (gen_lasx_xvldrepl_<lasxfmt_f>_insn
+	     (operands[0], operands[1], operands[2]));
+  DONE;
+})
+
+(define_insn "lasx_xvldrepl_<lasxfmt_f>_insn"
+  [(set (match_operand:LASX 0 "register_operand" "=f")
+	(vec_duplicate:LASX
+	  (mem:<UNITMODE> (plus:DI (match_operand:DI 1 "register_operand" "r")
+				   (match_operand 2 "aq12<lasxfmt>_operand")))))]
+  "ISA_HAS_LASX"
+{
+  return "xvldrepl.<lasxfmt>\t%u0,%1,%2";
+}
+  [(set_attr "type" "simd_load")
+   (set_attr "mode" "<MODE>")
+   (set_attr "length" "4")])
+
+;; Offset is "0"
+(define_insn "lasx_xvldrepl_<lasxfmt_f>_insn_0"
+  [(set (match_operand:LASX 0 "register_operand" "=f")
+    (vec_duplicate:LASX
+      (mem:<UNITMODE> (match_operand:DI 1 "register_operand" "r"))))]
+  "ISA_HAS_LASX"
+{
+    return "xvldrepl.<lasxfmt>\t%u0,%1,0";
+}
+  [(set_attr "type" "simd_load")
+   (set_attr "mode" "<MODE>")
+   (set_attr "length" "4")])
+
+;;XVADDWEV.H.B   XVSUBWEV.H.B   XVMULWEV.H.B
+;;XVADDWEV.H.BU  XVSUBWEV.H.BU  XVMULWEV.H.BU
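+;; In terms of elements the RTL below computes
+;;   res[i] = ext (op1[2*i]) <op> ext (op2[2*i]),  i = 0 .. 15,
+;; i.e. the even byte lanes are sign- or zero-extended (per <u>) to halfwords
+;; before the add/sub/multiply; the W.H and D.W patterns that follow use the
+;; same scheme at wider element sizes.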
+(define_insn "lasx_xv<optab>wev_h_b<u>"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(addsubmul:V16HI
+	  (any_extend:V16HI
+	    (vec_select:V16QI
+	      (match_operand:V32QI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)
+			 (const_int 16) (const_int 18)
+			 (const_int 20) (const_int 22)
+			 (const_int 24) (const_int 26)
+			 (const_int 28) (const_int 30)])))
+	  (any_extend:V16HI
+	    (vec_select:V16QI
+	      (match_operand:V32QI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)
+			 (const_int 16) (const_int 18)
+			 (const_int 20) (const_int 22)
+			 (const_int 24) (const_int 26)
+			 (const_int 28) (const_int 30)])))))]
+  "ISA_HAS_LASX"
+  "xv<optab>wev.h.b<u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V16HI")])
+
+;;XVADDWEV.W.H   XVSUBWEV.W.H   XVMULWEV.W.H
+;;XVADDWEV.W.HU  XVSUBWEV.W.HU  XVMULWEV.W.HU
+(define_insn "lasx_xv<optab>wev_w_h<u>"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(addsubmul:V8SI
+	  (any_extend:V8SI
+	    (vec_select:V8HI
+	      (match_operand:V16HI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)])))
+	  (any_extend:V8SI
+	    (vec_select:V8HI
+	      (match_operand:V16HI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)])))))]
+  "ISA_HAS_LASX"
+  "xv<optab>wev.w.h<u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V8SI")])
+
+;;XVADDWEV.D.W   XVSUBWEV.D.W   XVMULWEV.D.W
+;;XVADDWEV.D.WU  XVSUBWEV.D.WU  XVMULWEV.D.WU
+(define_insn "lasx_xv<optab>wev_d_w<u>"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(addsubmul:V4DI
+	  (any_extend:V4DI
+	    (vec_select:V4SI
+	      (match_operand:V8SI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)])))
+	  (any_extend:V4DI
+	    (vec_select:V4SI
+	      (match_operand:V8SI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)])))))]
+  "ISA_HAS_LASX"
+  "xv<optab>wev.d.w<u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVADDWEV.Q.D
+;;TODO2
+(define_insn "lasx_xvaddwev_q_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVADDWEV))]
+  "ISA_HAS_LASX"
+  "xvaddwev.q.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVSUBWEV.Q.D
+;;TODO2
+(define_insn "lasx_xvsubwev_q_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVSUBWEV))]
+  "ISA_HAS_LASX"
+  "xvsubwev.q.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVMULWEV.Q.D
+;;TODO2
+(define_insn "lasx_xvmulwev_q_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVMULWEV))]
+  "ISA_HAS_LASX"
+  "xvmulwev.q.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVADDWOD.H.B   XVSUBWOD.H.B   XVMULWOD.H.B
+;;XVADDWOD.H.BU  XVSUBWOD.H.BU  XVMULWOD.H.BU
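+;; Same scheme as the even-lane (*wev*) patterns above, but over the odd
+;; lanes: res[i] = ext (op1[2*i + 1]) <op> ext (op2[2*i + 1]).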
+(define_insn "lasx_xv<optab>wod_h_b<u>"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(addsubmul:V16HI
+	  (any_extend:V16HI
+	    (vec_select:V16QI
+	      (match_operand:V32QI 1 "register_operand" "%f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)
+			 (const_int 17) (const_int 19)
+			 (const_int 21) (const_int 23)
+			 (const_int 25) (const_int 27)
+			 (const_int 29) (const_int 31)])))
+	  (any_extend:V16HI
+	    (vec_select:V16QI
+	      (match_operand:V32QI 2 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)
+			 (const_int 17) (const_int 19)
+			 (const_int 21) (const_int 23)
+			 (const_int 25) (const_int 27)
+			 (const_int 29) (const_int 31)])))))]
+  "ISA_HAS_LASX"
+  "xv<optab>wod.h.b<u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V16HI")])
+
+;;XVADDWOD.W.H   XVSUBWOD.W.H   XVMULWOD.W.H
+;;XVADDWOD.W.HU  XVSUBWOD.W.HU  XVMULWOD.W.HU
+(define_insn "lasx_xv<optab>wod_w_h<u>"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(addsubmul:V8SI
+	  (any_extend:V8SI
+	    (vec_select:V8HI
+	      (match_operand:V16HI 1 "register_operand" "%f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)])))
+	  (any_extend:V8SI
+	    (vec_select:V8HI
+	      (match_operand:V16HI 2 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)])))))]
+  "ISA_HAS_LASX"
+  "xv<optab>wod.w.h<u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V8SI")])
+
+;;XVADDWOD.D.W   XVSUBWOD.D.W   XVMULWOD.D.W
+;;XVADDWOD.D.WU  XVSUBWOD.D.WU  XVMULWOD.D.WU
+(define_insn "lasx_xv<optab>wod_d_w<u>"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(addsubmul:V4DI
+	  (any_extend:V4DI
+	    (vec_select:V4SI
+	      (match_operand:V8SI 1 "register_operand" "%f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)])))
+	  (any_extend:V4DI
+	    (vec_select:V4SI
+	      (match_operand:V8SI 2 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)])))))]
+  "ISA_HAS_LASX"
+  "xv<optab>wod.d.w<u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVADDWOD.Q.D
+;;TODO2
+(define_insn "lasx_xvaddwod_q_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVADDWOD))]
+  "ISA_HAS_LASX"
+  "xvaddwod.q.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVSUBWOD.Q.D
+;;TODO2
+(define_insn "lasx_xvsubwod_q_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVSUBWOD))]
+  "ISA_HAS_LASX"
+  "xvsubwod.q.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVMULWOD.Q.D
+;;TODO2
+(define_insn "lasx_xvmulwod_q_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVMULWOD))]
+  "ISA_HAS_LASX"
+  "xvmulwod.q.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVADDWEV.Q.DU
+;;TODO2
+(define_insn "lasx_xvaddwev_q_du"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVADDWEV2))]
+  "ISA_HAS_LASX"
+  "xvaddwev.q.du\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVSUBWEV.Q.DU
+;;TODO2
+(define_insn "lasx_xvsubwev_q_du"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVSUBWEV2))]
+  "ISA_HAS_LASX"
+  "xvsubwev.q.du\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVMULWEV.Q.DU
+;;TODO2
+(define_insn "lasx_xvmulwev_q_du"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVMULWEV2))]
+  "ISA_HAS_LASX"
+  "xvmulwev.q.du\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVADDWOD.Q.DU
+;;TODO2
+(define_insn "lasx_xvaddwod_q_du"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVADDWOD2))]
+  "ISA_HAS_LASX"
+  "xvaddwod.q.du\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVSUBWOD.Q.DU
+;;TODO2
+(define_insn "lasx_xvsubwod_q_du"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVSUBWOD2))]
+  "ISA_HAS_LASX"
+  "xvsubwod.q.du\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVMULWOD.Q.DU
+;;TODO2
+(define_insn "lasx_xvmulwod_q_du"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVMULWOD2))]
+  "ISA_HAS_LASX"
+  "xvmulwod.q.du\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVADDWEV.H.BU.B   XVMULWEV.H.BU.B
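+;; Mixed-signedness even-lane variant: operand 1 is zero-extended and
+;; operand 2 sign-extended before the add or multiply.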
+(define_insn "lasx_xv<optab>wev_h_bu_b"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(addmul:V16HI
+	  (zero_extend:V16HI
+	    (vec_select:V16QI
+	      (match_operand:V32QI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)
+			 (const_int 16) (const_int 18)
+			 (const_int 20) (const_int 22)
+			 (const_int 24) (const_int 26)
+			 (const_int 28) (const_int 30)])))
+	  (sign_extend:V16HI
+	    (vec_select:V16QI
+	      (match_operand:V32QI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)
+			 (const_int 16) (const_int 18)
+			 (const_int 20) (const_int 22)
+			 (const_int 24) (const_int 26)
+			 (const_int 28) (const_int 30)])))))]
+  "ISA_HAS_LASX"
+  "xv<optab>wev.h.bu.b\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V16HI")])
+
+;;XVADDWEV.W.HU.H   XVMULWEV.W.HU.H
+(define_insn "lasx_xv<optab>wev_w_hu_h"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(addmul:V8SI
+	  (zero_extend:V8SI
+	    (vec_select:V8HI
+	      (match_operand:V16HI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)])))
+	  (sign_extend:V8SI
+	    (vec_select:V8HI
+	      (match_operand:V16HI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)
+			 (const_int 8) (const_int 10)
+			 (const_int 12) (const_int 14)])))))]
+  "ISA_HAS_LASX"
+  "xv<optab>wev.w.hu.h\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V8SI")])
+
+;;XVADDWEV.D.WU.W   XVMULWEV.D.WU.W
+(define_insn "lasx_xv<optab>wev_d_wu_w"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(addmul:V4DI
+	  (zero_extend:V4DI
+	    (vec_select:V4SI
+	      (match_operand:V8SI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)])))
+	  (sign_extend:V4DI
+	    (vec_select:V4SI
+	      (match_operand:V8SI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)])))))]
+  "ISA_HAS_LASX"
+  "xv<optab>wev.d.wu.w\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVADDWOD.H.BU.B   XVMULWOD.H.BU.B
+(define_insn "lasx_xv<optab>wod_h_bu_b"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(addmul:V16HI
+	  (zero_extend:V16HI
+	    (vec_select:V16QI
+	      (match_operand:V32QI 1 "register_operand" "%f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)
+			 (const_int 17) (const_int 19)
+			 (const_int 21) (const_int 23)
+			 (const_int 25) (const_int 27)
+			 (const_int 29) (const_int 31)])))
+	  (sign_extend:V16HI
+	    (vec_select:V16QI
+	      (match_operand:V32QI 2 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)
+			 (const_int 17) (const_int 19)
+			 (const_int 21) (const_int 23)
+			 (const_int 25) (const_int 27)
+			 (const_int 29) (const_int 31)])))))]
+  "ISA_HAS_LASX"
+  "xv<optab>wod.h.bu.b\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V16HI")])
+
+;;XVADDWOD.W.HU.H   XVMULWOD.W.HU.H
+(define_insn "lasx_xv<optab>wod_w_hu_h"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(addmul:V8SI
+	  (zero_extend:V8SI
+	    (vec_select:V8HI
+	      (match_operand:V16HI 1 "register_operand" "%f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)])))
+	  (sign_extend:V8SI
+	    (vec_select:V8HI
+	      (match_operand:V16HI 2 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)
+			 (const_int 9) (const_int 11)
+			 (const_int 13) (const_int 15)])))))]
+  "ISA_HAS_LASX"
+  "xv<optab>wod.w.hu.h\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V8SI")])
+
+;;XVADDWOD.D.WU.W   XVMULWOD.D.WU.W
+(define_insn "lasx_xv<optab>wod_d_wu_w"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(addmul:V4DI
+	  (zero_extend:V4DI
+	    (vec_select:V4SI
+	      (match_operand:V8SI 1 "register_operand" "%f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)])))
+	  (sign_extend:V4DI
+	    (vec_select:V4SI
+	      (match_operand:V8SI 2 "register_operand" "f")
+	      (parallel [(const_int 1) (const_int 3)
+			 (const_int 5) (const_int 7)])))))]
+  "ISA_HAS_LASX"
+  "xv<optab>wod.d.wu.w\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVMADDWEV.H.B   XVMADDWEV.H.BU
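+;; Even-lane widening multiply-accumulate: operand 1 (tied to the output)
+;; is the accumulator, so res[i] = acc[i] + ext (op2[2*i]) * ext (op3[2*i]).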
+(define_insn "lasx_xvmaddwev_h_b<u>"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(plus:V16HI
+	  (match_operand:V16HI 1 "register_operand" "0")
+	  (mult:V16HI
+	    (any_extend:V16HI
+	      (vec_select:V16QI
+		(match_operand:V32QI 2 "register_operand" "%f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)
+			   (const_int 8) (const_int 10)
+			   (const_int 12) (const_int 14)
+			   (const_int 16) (const_int 18)
+			   (const_int 20) (const_int 22)
+			   (const_int 24) (const_int 26)
+			   (const_int 28) (const_int 30)])))
+	    (any_extend:V16HI
+	      (vec_select:V16QI
+		(match_operand:V32QI 3 "register_operand" "f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)
+			   (const_int 8) (const_int 10)
+			   (const_int 12) (const_int 14)
+			   (const_int 16) (const_int 18)
+			   (const_int 20) (const_int 22)
+			   (const_int 24) (const_int 26)
+			   (const_int 28) (const_int 30)]))))))]
+  "ISA_HAS_LASX"
+  "xvmaddwev.h.b<u>\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V16HI")])
+
+;;XVMADDWEV.W.H   XVMADDWEV.W.HU
+(define_insn "lasx_xvmaddwev_w_h<u>"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(plus:V8SI
+	  (match_operand:V8SI 1 "register_operand" "0")
+	  (mult:V8SI
+	    (any_extend:V8SI
+	      (vec_select:V8HI
+		(match_operand:V16HI 2 "register_operand" "%f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)
+			   (const_int 8) (const_int 10)
+			   (const_int 12) (const_int 14)])))
+	    (any_extend:V8SI
+	      (vec_select:V8HI
+		(match_operand:V16HI 3 "register_operand" "f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)
+			   (const_int 8) (const_int 10)
+			   (const_int 12) (const_int 14)]))))))]
+  "ISA_HAS_LASX"
+  "xvmaddwev.w.h<u>\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V8SI")])
+
+;;XVMADDWEV.D.W   XVMADDWEV.D.WU
+(define_insn "lasx_xvmaddwev_d_w<u>"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(plus:V4DI
+	  (match_operand:V4DI 1 "register_operand" "0")
+	  (mult:V4DI
+	    (any_extend:V4DI
+	      (vec_select:V4SI
+		(match_operand:V8SI 2 "register_operand" "%f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)])))
+	    (any_extend:V4DI
+	      (vec_select:V4SI
+		(match_operand:V8SI 3 "register_operand" "f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)]))))))]
+  "ISA_HAS_LASX"
+  "xvmaddwev.d.w<u>\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V4DI")])
+
+;;XVMADDWEV.Q.D
+;;TODO2
+(define_insn "lasx_xvmaddwev_q_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "0")
+		      (match_operand:V4DI 2 "register_operand" "f")
+		      (match_operand:V4DI 3 "register_operand" "f")]
+		     UNSPEC_LASX_XVMADDWEV))]
+  "ISA_HAS_LASX"
+  "xvmaddwev.q.d\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVMADDWOD.H.B   XVMADDWOD.H.BU
+(define_insn "lasx_xvmaddwod_h_b<u>"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(plus:V16HI
+	  (match_operand:V16HI 1 "register_operand" "0")
+	  (mult:V16HI
+	    (any_extend:V16HI
+	      (vec_select:V16QI
+		(match_operand:V32QI 2 "register_operand" "%f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)
+			   (const_int 9) (const_int 11)
+			   (const_int 13) (const_int 15)
+			   (const_int 17) (const_int 19)
+			   (const_int 21) (const_int 23)
+			   (const_int 25) (const_int 27)
+			   (const_int 29) (const_int 31)])))
+	    (any_extend:V16HI
+	      (vec_select:V16QI
+		(match_operand:V32QI 3 "register_operand" "f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)
+			   (const_int 9) (const_int 11)
+			   (const_int 13) (const_int 15)
+			   (const_int 17) (const_int 19)
+			   (const_int 21) (const_int 23)
+			   (const_int 25) (const_int 27)
+			   (const_int 29) (const_int 31)]))))))]
+  "ISA_HAS_LASX"
+  "xvmaddwod.h.b<u>\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V16HI")])
+
+;;XVMADDWOD.W.H   XVMADDWOD.W.HU
+(define_insn "lasx_xvmaddwod_w_h<u>"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(plus:V8SI
+	  (match_operand:V8SI 1 "register_operand" "0")
+	  (mult:V8SI
+	    (any_extend:V8SI
+	      (vec_select:V8HI
+		(match_operand:V16HI 2 "register_operand" "%f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)
+			   (const_int 9) (const_int 11)
+			   (const_int 13) (const_int 15)])))
+	    (any_extend:V8SI
+	      (vec_select:V8HI
+		(match_operand:V16HI 3 "register_operand" "f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)
+			   (const_int 9) (const_int 11)
+			   (const_int 13) (const_int 15)]))))))]
+  "ISA_HAS_LASX"
+  "xvmaddwod.w.h<u>\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V8SI")])
+
+;;XVMADDWOD.D.W   XVMADDWOD.D.WU
+(define_insn "lasx_xvmaddwod_d_w<u>"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(plus:V4DI
+	  (match_operand:V4DI 1 "register_operand" "0")
+	  (mult:V4DI
+	    (any_extend:V4DI
+	      (vec_select:V4SI
+		(match_operand:V8SI 2 "register_operand" "%f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)])))
+	    (any_extend:V4DI
+	      (vec_select:V4SI
+		(match_operand:V8SI 3 "register_operand" "f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)]))))))]
+  "ISA_HAS_LASX"
+  "xvmaddwod.d.w<u>\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V4DI")])
+
+;;XVMADDWOD.Q.D
+;;TODO2
+(define_insn "lasx_xvmaddwod_q_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "0")
+		      (match_operand:V4DI 2 "register_operand" "f")
+		      (match_operand:V4DI 3 "register_operand" "f")]
+		     UNSPEC_LASX_XVMADDWOD))]
+  "ISA_HAS_LASX"
+  "xvmaddwod.q.d\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVMADDWEV.Q.DU
+;;TODO2
+(define_insn "lasx_xvmaddwev_q_du"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "0")
+		      (match_operand:V4DI 2 "register_operand" "f")
+		      (match_operand:V4DI 3 "register_operand" "f")]
+		     UNSPEC_LASX_XVMADDWEV2))]
+  "ISA_HAS_LASX"
+  "xvmaddwev.q.du\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVMADDWOD.Q.DU
+;;TODO2
+(define_insn "lasx_xvmaddwod_q_du"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "0")
+		      (match_operand:V4DI 2 "register_operand" "f")
+		      (match_operand:V4DI 3 "register_operand" "f")]
+		     UNSPEC_LASX_XVMADDWOD2))]
+  "ISA_HAS_LASX"
+  "xvmaddwod.q.du\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVMADDWEV.H.BU.B
+(define_insn "lasx_xvmaddwev_h_bu_b"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(plus:V16HI
+	  (match_operand:V16HI 1 "register_operand" "0")
+	  (mult:V16HI
+	    (zero_extend:V16HI
+	      (vec_select:V16QI
+		(match_operand:V32QI 2 "register_operand" "%f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)
+			   (const_int 8) (const_int 10)
+			   (const_int 12) (const_int 14)
+			   (const_int 16) (const_int 18)
+			   (const_int 20) (const_int 22)
+			   (const_int 24) (const_int 26)
+			   (const_int 28) (const_int 30)])))
+	    (sign_extend:V16HI
+	      (vec_select:V16QI
+		(match_operand:V32QI 3 "register_operand" "f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)
+			   (const_int 8) (const_int 10)
+			   (const_int 12) (const_int 14)
+			   (const_int 16) (const_int 18)
+			   (const_int 20) (const_int 22)
+			   (const_int 24) (const_int 26)
+			   (const_int 28) (const_int 30)]))))))]
+  "ISA_HAS_LASX"
+  "xvmaddwev.h.bu.b\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V16HI")])
+
+;;XVMADDWEV.W.HU.H
+(define_insn "lasx_xvmaddwev_w_hu_h"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(plus:V8SI
+	  (match_operand:V8SI 1 "register_operand" "0")
+	  (mult:V8SI
+	    (zero_extend:V8SI
+	      (vec_select:V8HI
+		(match_operand:V16HI 2 "register_operand" "%f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)
+			   (const_int 8) (const_int 10)
+			   (const_int 12) (const_int 14)])))
+	    (sign_extend:V8SI
+	      (vec_select:V8HI
+		(match_operand:V16HI 3 "register_operand" "f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)
+			   (const_int 8) (const_int 10)
+			   (const_int 12) (const_int 14)]))))))]
+  "ISA_HAS_LASX"
+  "xvmaddwev.w.hu.h\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V8SI")])
+
+;;XVMADDWEV.D.WU.W
+(define_insn "lasx_xvmaddwev_d_wu_w"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(plus:V4DI
+	  (match_operand:V4DI 1 "register_operand" "0")
+	  (mult:V4DI
+	    (zero_extend:V4DI
+	      (vec_select:V4SI
+		(match_operand:V8SI 2 "register_operand" "%f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)])))
+	    (sign_extend:V4DI
+	      (vec_select:V4SI
+		(match_operand:V8SI 3 "register_operand" "f")
+		(parallel [(const_int 0) (const_int 2)
+			   (const_int 4) (const_int 6)]))))))]
+  "ISA_HAS_LASX"
+  "xvmaddwev.d.wu.w\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V4DI")])
+
+;;XVMADDWEV.Q.DU.D
+;;TODO2
+(define_insn "lasx_xvmaddwev_q_du_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "0")
+		      (match_operand:V4DI 2 "register_operand" "f")
+		      (match_operand:V4DI 3 "register_operand" "f")]
+		     UNSPEC_LASX_XVMADDWEV3))]
+  "ISA_HAS_LASX"
+  "xvmaddwev.q.du.d\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVMADDWOD.H.BU.B
+(define_insn "lasx_xvmaddwod_h_bu_b"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(plus:V16HI
+	  (match_operand:V16HI 1 "register_operand" "0")
+	  (mult:V16HI
+	    (zero_extend:V16HI
+	      (vec_select:V16QI
+		(match_operand:V32QI 2 "register_operand" "%f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)
+			   (const_int 9) (const_int 11)
+			   (const_int 13) (const_int 15)
+			   (const_int 17) (const_int 19)
+			   (const_int 21) (const_int 23)
+			   (const_int 25) (const_int 27)
+			   (const_int 29) (const_int 31)])))
+	    (sign_extend:V16HI
+	      (vec_select:V16QI
+		(match_operand:V32QI 3 "register_operand" "f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)
+			   (const_int 9) (const_int 11)
+			   (const_int 13) (const_int 15)
+			   (const_int 17) (const_int 19)
+			   (const_int 21) (const_int 23)
+			   (const_int 25) (const_int 27)
+			   (const_int 29) (const_int 31)]))))))]
+  "ISA_HAS_LASX"
+  "xvmaddwod.h.bu.b\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V16HI")])
+
+;;XVMADDWOD.W.HU.H
+(define_insn "lasx_xvmaddwod_w_hu_h"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(plus:V8SI
+	  (match_operand:V8SI 1 "register_operand" "0")
+	  (mult:V8SI
+	    (zero_extend:V8SI
+	      (vec_select:V8HI
+		(match_operand:V16HI 2 "register_operand" "%f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)
+			   (const_int 9) (const_int 11)
+			   (const_int 13) (const_int 15)])))
+	    (sign_extend:V8SI
+	      (vec_select:V8HI
+		(match_operand:V16HI 3 "register_operand" "f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)
+			   (const_int 9) (const_int 11)
+			   (const_int 13) (const_int 15)]))))))]
+  "ISA_HAS_LASX"
+  "xvmaddwod.w.hu.h\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V8SI")])
+
+;;XVMADDWOD.D.WU.W
+(define_insn "lasx_xvmaddwod_d_wu_w"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(plus:V4DI
+	  (match_operand:V4DI 1 "register_operand" "0")
+	  (mult:V4DI
+	    (zero_extend:V4DI
+	      (vec_select:V4SI
+		(match_operand:V8SI 2 "register_operand" "%f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)])))
+	    (sign_extend:V4DI
+	      (vec_select:V4SI
+		(match_operand:V8SI 3 "register_operand" "f")
+		(parallel [(const_int 1) (const_int 3)
+			   (const_int 5) (const_int 7)]))))))]
+  "ISA_HAS_LASX"
+  "xvmaddwod.d.wu.w\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_fmadd")
+   (set_attr "mode" "V4DI")])
+
+;;XVMADDWOD.Q.DU.D
+;;TODO2
+(define_insn "lasx_xvmaddwod_q_du_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "0")
+		      (match_operand:V4DI 2 "register_operand" "f")
+		      (match_operand:V4DI 3 "register_operand" "f")]
+		     UNSPEC_LASX_XVMADDWOD3))]
+  "ISA_HAS_LASX"
+  "xvmaddwod.q.du.d\t%u0,%u2,%u3"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVHADDW.Q.D
+;;TODO2
+(define_insn "lasx_xvhaddw_q_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVHADDW_Q_D))]
+  "ISA_HAS_LASX"
+  "xvhaddw.q.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVHSUBW.Q.D
+;;TODO2
+(define_insn "lasx_xvhsubw_q_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVHSUBW_Q_D))]
+  "ISA_HAS_LASX"
+  "xvhsubw.q.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVHADDW.QU.DU
+;;TODO2
+(define_insn "lasx_xvhaddw_qu_du"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVHADDW_QU_DU))]
+  "ISA_HAS_LASX"
+  "xvhaddw.qu.du\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVHSUBW.QU.DU
+;;TODO2
+(define_insn "lasx_xvhsubw_qu_du"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVHSUBW_QU_DU))]
+  "ISA_HAS_LASX"
+  "xvhsubw.qu.du\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVROTR.B   XVROTR.H   XVROTR.W   XVROTR.D
+;;TODO-478
+(define_insn "lasx_xvrotr_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand:ILASX 2 "register_operand" "f")]
+		      UNSPEC_LASX_XVROTR))]
+  "ISA_HAS_LASX"
+  "xvrotr.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+;;XVADD.Q
+;;TODO2
+(define_insn "lasx_xvadd_q"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVADD_Q))]
+  "ISA_HAS_LASX"
+  "xvadd.q\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVSUB.Q
+;;TODO2
+(define_insn "lasx_xvsub_q"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVSUB_Q))]
+  "ISA_HAS_LASX"
+  "xvsub.q\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVSSRLN.B.H   XVSSRLN.H.W   XVSSRLN.W.D
+(define_insn "lasx_xvssrln_<hlasxfmt>_<lasxfmt>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(unspec:<VHSMODE256> [(match_operand:ILASX_DWH 1 "register_operand" "f")
+			      (match_operand:ILASX_DWH 2 "register_operand" "f")]
+			     UNSPEC_LASX_XVSSRLN))]
+  "ISA_HAS_LASX"
+  "xvssrln.<hlasxfmt>.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+;;XVREPLVE.B   XVREPLVE.H   XVREPLVE.W   XVREPLVE.D
+(define_insn "lasx_xvreplve_<lasxfmt_f>"
+  [(set (match_operand:LASX 0 "register_operand" "=f")
+	(unspec:LASX [(match_operand:LASX 1 "register_operand" "f")
+		      (match_operand:SI 2 "register_operand" "r")]
+		     UNSPEC_LASX_XVREPLVE))]
+  "ISA_HAS_LASX"
+  "xvreplve.<lasxfmt>\t%u0,%u1,%z2"
+  [(set_attr "type" "simd_splat")
+   (set_attr "mode" "<MODE>")])
+
+;;XVADDWEV.Q.DU.D
+(define_insn "lasx_xvaddwev_q_du_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVADDWEV3))]
+  "ISA_HAS_LASX"
+  "xvaddwev.q.du.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVADDWOD.Q.DU.D
+(define_insn "lasx_xvaddwod_q_du_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVADDWOD3))]
+  "ISA_HAS_LASX"
+  "xvaddwod.q.du.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVMULWEV.Q.DU.D
+(define_insn "lasx_xvmulwev_q_du_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVMULWEV3))]
+  "ISA_HAS_LASX"
+  "xvmulwev.q.du.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;;XVMULWOD.Q.DU.D
+(define_insn "lasx_xvmulwod_q_du_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")
+		      (match_operand:V4DI 2 "register_operand" "f")]
+		     UNSPEC_LASX_XVMULWOD3))]
+  "ISA_HAS_LASX"
+  "xvmulwod.q.du.d\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
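+;; Extract one word element of a V8SI register into a GPR.  A usage sketch,
+;; assuming the __lasx_xvpickve2gr_* intrinsic names from lasxintrin.h:
+;;   int x = __lasx_xvpickve2gr_w (v, 3);   /* element 3 of v */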
+(define_insn "lasx_xvpickve2gr_w<u>"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+	(any_extend:SI
+	  (vec_select:SI
+	    (match_operand:V8SI 1 "register_operand" "f")
+	    (parallel [(match_operand 2 "const_0_to_7_operand" "")]))))]
+  "ISA_HAS_LASX"
+  "xvpickve2gr.w<u>\t%0,%u1,%2"
+  [(set_attr "type" "simd_copy")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_xvmskgez_b"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(unspec:V32QI [(match_operand:V32QI 1 "register_operand" "f")]
+		      UNSPEC_LASX_XVMSKGEZ))]
+  "ISA_HAS_LASX"
+  "xvmskgez.b\t%u0,%u1"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "V32QI")])
+
+(define_insn "lasx_xvmsknz_b"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(unspec:V32QI [(match_operand:V32QI 1 "register_operand" "f")]
+		      UNSPEC_LASX_XVMSKNZ))]
+  "ISA_HAS_LASX"
+  "xvmsknz.b\t%u0,%u1"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "V32QI")])
+
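+;; Widen the elements in the high half of the source register to the next
+;; wider element type, with sign or zero extension per <u>.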
+(define_insn "lasx_xvexth_h<u>_b<u>"
+  [(set (match_operand:V16HI 0 "register_operand" "=f")
+	(any_extend:V16HI
+	  (vec_select:V16QI
+	    (match_operand:V32QI 1 "register_operand" "f")
+	      (parallel [(const_int 16) (const_int 17)
+			 (const_int 18) (const_int 19)
+			 (const_int 20) (const_int 21)
+			 (const_int 22) (const_int 23)
+			 (const_int 24) (const_int 25)
+			 (const_int 26) (const_int 27)
+			 (const_int 28) (const_int 29)
+			 (const_int 30) (const_int 31)]))))]
+  "ISA_HAS_LASX"
+  "xvexth.h<u>.b<u>\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V16HI")])
+
+(define_insn "lasx_xvexth_w<u>_h<u>"
+  [(set (match_operand:V8SI 0 "register_operand" "=f")
+	(any_extend:V8SI
+	  (vec_select:V8HI
+	    (match_operand:V16HI 1 "register_operand" "f")
+	    (parallel [(const_int 8) (const_int 9)
+		       (const_int 10) (const_int 11)
+		       (const_int 12) (const_int 13)
+		       (const_int 14) (const_int 15)]))))]
+  "ISA_HAS_LASX"
+  "xvexth.w<u>.h<u>\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V8SI")])
+
+(define_insn "lasx_xvexth_d<u>_w<u>"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(any_extend:V4DI
+	  (vec_select:V4SI
+	    (match_operand:V8SI 1 "register_operand" "f")
+	    (parallel [(const_int 4) (const_int 5)
+		       (const_int 6) (const_int 7)]))))]
+  "ISA_HAS_LASX"
+  "xvexth.d<u>.w<u>\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_xvexth_q_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVEXTH_Q_D))]
+  "ISA_HAS_LASX"
+  "xvexth.q.d\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_xvexth_qu_du"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVEXTH_QU_DU))]
+  "ISA_HAS_LASX"
+  "xvexth.qu.du\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_xvrotri_<lasxfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(rotatert:ILASX (match_operand:ILASX 1 "register_operand" "f")
+		       (match_operand 2 "const_<bitimm256>_operand" "")))]
+  "ISA_HAS_LASX"
+  "xvrotri.<lasxfmt>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_shf")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvextl_q_d"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVEXTL_Q_D))]
+  "ISA_HAS_LASX"
+  "xvextl.q.d\t%u0,%u1"
+  [(set_attr "type" "simd_fcvt")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_xvsrlni_<lasxfmt>_<dlasxqfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVSRLNI))]
+  "ISA_HAS_LASX"
+  "xvsrlni.<lasxfmt>.<dlasxqfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsrlrni_<lasxfmt>_<dlasxqfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVSRLRNI))]
+  "ISA_HAS_LASX"
+  "xvsrlrni.<lasxfmt>.<dlasxqfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrlni_<lasxfmt>_<dlasxqfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVSSRLNI))]
+  "ISA_HAS_LASX"
+  "xvssrlni.<lasxfmt>.<dlasxqfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrlni_<lasxfmt_u>_<dlasxqfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVSSRLNI2))]
+  "ISA_HAS_LASX"
+  "xvssrlni.<lasxfmt_u>.<dlasxqfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrlrni_<lasxfmt>_<dlasxqfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVSSRLRNI))]
+  "ISA_HAS_LASX"
+  "xvssrlrni.<lasxfmt>.<dlasxqfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrlrni_<lasxfmt_u>_<dlasxqfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVSSRLRNI2))]
+  "ISA_HAS_LASX"
+  "xvssrlrni.<lasxfmt_u>.<dlasxqfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsrani_<lasxfmt>_<dlasxqfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVSRANI))]
+  "ISA_HAS_LASX"
+  "xvsrani.<lasxfmt>.<dlasxqfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvsrarni_<lasxfmt>_<dlasxqfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVSRARNI))]
+  "ISA_HAS_LASX"
+  "xvsrarni.<lasxfmt>.<dlasxqfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrani_<lasxfmt>_<dlasxqfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVSSRANI))]
+  "ISA_HAS_LASX"
+  "xvssrani.<lasxfmt>.<dlasxqfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrani_<lasxfmt_u>_<dlasxqfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVSSRANI2))]
+  "ISA_HAS_LASX"
+  "xvssrani.<lasxfmt_u>.<dlasxqfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrarni_<lasxfmt>_<dlasxqfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVSSRARNI))]
+  "ISA_HAS_LASX"
+  "xvssrarni.<lasxfmt>.<dlasxqfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrarni_<lasxfmt_u>_<dlasxqfmt>"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(unspec:ILASX [(match_operand:ILASX 1 "register_operand" "0")
+		       (match_operand:ILASX 2 "register_operand" "f")
+		       (match_operand 3 "const_uimm8_operand" "")]
+		      UNSPEC_LASX_XVSSRARNI2))]
+  "ISA_HAS_LASX"
+  "xvssrarni.<lasxfmt_u>.<dlasxqfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shift")
+   (set_attr "mode" "<MODE>")])
+
+(define_mode_attr VDOUBLEMODEW256
+  [(V8SI "V16SI")
+   (V8SF "V16SF")])
+
+(define_insn "lasx_xvpermi_<lasxfmt_f_wd>"
+  [(set (match_operand:LASX_W 0 "register_operand" "=f")
+	(unspec:LASX_W [(match_operand:LASX_W 1 "register_operand" "0")
+			(match_operand:LASX_W 2 "register_operand" "f")
+			(match_operand 3 "const_uimm8_operand" "")]
+		       UNSPEC_LASX_XVPERMI))]
+  "ISA_HAS_LASX"
+  "xvpermi.w\t%u0,%u2,%3"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
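+;; Two-source word permutation.  The insn condition requires operands 7-10
+;; to equal operands 3-6 plus 4 (the same selection applied to the other
+;; 128-bit lane); the output code packs operands 3-6 (two bits each, with 8
+;; subtracted from operands 5 and 6) into the 8-bit xvpermi.w immediate.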
+(define_insn "lasx_xvpermi_<lasxfmt_f_wd>_1"
+  [(set (match_operand:LASX_W 0 "register_operand" "=f")
+	(vec_select:LASX_W
+	  (vec_concat:<VDOUBLEMODEW256>
+	    (match_operand:LASX_W 1 "register_operand" "f")
+	    (match_operand:LASX_W 2 "register_operand" "0"))
+	  (parallel [(match_operand 3 "const_0_to_3_operand")
+		     (match_operand 4 "const_0_to_3_operand")
+		     (match_operand 5 "const_8_to_11_operand")
+		     (match_operand 6 "const_8_to_11_operand")
+		     (match_operand 7 "const_4_to_7_operand")
+		     (match_operand 8 "const_4_to_7_operand")
+		     (match_operand 9 "const_12_to_15_operand")
+		     (match_operand 10 "const_12_to_15_operand")])))]
+  "ISA_HAS_LASX
+  && INTVAL (operands[3]) + 4 == INTVAL (operands[7])
+  && INTVAL (operands[4]) + 4 == INTVAL (operands[8])
+  && INTVAL (operands[5]) + 4 == INTVAL (operands[9])
+  && INTVAL (operands[6]) + 4 == INTVAL (operands[10])"
+{
+  int mask = 0;
+  mask |= INTVAL (operands[3]) << 0;
+  mask |= INTVAL (operands[4]) << 2;
+  mask |= (INTVAL (operands[5]) - 8) << 4;
+  mask |= (INTVAL (operands[6]) - 8) << 6;
+  operands[3] = GEN_INT (mask);
+
+  return "xvpermi.w\t%u0,%u1,%3";
+}
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "<MODE>")])
+
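+;; Whole-register load/store: the immediate offset is folded into the
+;; address and a plain 256-bit move is emitted.  A usage sketch, assuming
+;; the __lasx_xvld/__lasx_xvst intrinsic names from lasxintrin.h:
+;;   __m256i v = __lasx_xvld (buf, 32);   /* load 32 bytes from buf + 32 */
+;;   __lasx_xvst (v, buf, 64);            /* store them at buf + 64 */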
+(define_expand "lasx_xvld"
+  [(match_operand:V32QI 0 "register_operand")
+   (match_operand 1 "pmode_register_operand")
+   (match_operand 2 "aq12b_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx addr = plus_constant (GET_MODE (operands[1]), operands[1],
+			    INTVAL (operands[2]));
+  loongarch_emit_move (operands[0], gen_rtx_MEM (V32QImode, addr));
+  DONE;
+})
+
+(define_expand "lasx_xvst"
+  [(match_operand:V32QI 0 "register_operand")
+   (match_operand 1 "pmode_register_operand")
+   (match_operand 2 "aq12b_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx addr = plus_constant (GET_MODE (operands[1]), operands[1],
+			    INTVAL (operands[2]));
+  loongarch_emit_move (gen_rtx_MEM (V32QImode, addr), operands[0]);
+  DONE;
+})
+
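+;; Store a single element of an LASX register at base + offset.  A usage
+;; sketch, assuming the __lasx_xvstelm_* intrinsic names from lasxintrin.h:
+;;   __lasx_xvstelm_w (v, buf, 8, 2);   /* store element 2 of v at buf + 8 */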
+(define_expand "lasx_xvstelm_<lasxfmt_f>"
+  [(match_operand:LASX 0 "register_operand")
+   (match_operand 3 "const_<indeximm256>_operand")
+   (match_operand 2 "aq8<lasxfmt>_operand")
+   (match_operand 1 "pmode_register_operand")]
+  "ISA_HAS_LASX"
+{
+  emit_insn (gen_lasx_xvstelm_<lasxfmt_f>_insn
+	     (operands[1], operands[2], operands[0], operands[3]));
+  DONE;
+})
+
+(define_insn "lasx_xvstelm_<lasxfmt_f>_insn"
+  [(set (mem:<UNITMODE> (plus:DI (match_operand:DI 0 "register_operand" "r")
+				 (match_operand 1 "aq8<lasxfmt>_operand")))
+	(vec_select:<UNITMODE>
+	  (match_operand:LASX 2 "register_operand" "f")
+	  (parallel [(match_operand 3 "const_<indeximm256>_operand" "")])))]
+  "ISA_HAS_LASX"
+{
+  return "xvstelm.<lasxfmt>\t%u2,%0,%1,%3";
+}
+  [(set_attr "type" "simd_store")
+   (set_attr "mode" "<MODE>")
+   (set_attr "length" "4")])
+
+;; Offset is "0"
+(define_insn "lasx_xvstelm_<lasxfmt_f>_insn_0"
+  [(set (mem:<UNITMODE> (match_operand:DI 0 "register_operand" "r"))
+    (vec_select:<UNITMODE>
+      (match_operand:LASX_WD 1 "register_operand" "f")
+      (parallel [(match_operand:SI 2 "const_<indeximm256>_operand")])))]
+  "ISA_HAS_LASX"
+{
+    return "xvstelm.<lasxfmt>\t%u1,%0,0,%2";
+}
+  [(set_attr "type" "simd_store")
+   (set_attr "mode" "<MODE>")
+   (set_attr "length" "4")])
+
+(define_insn "lasx_xvinsve0_<lasxfmt_f>"
+  [(set (match_operand:LASX_WD 0 "register_operand" "=f")
+	(unspec:LASX_WD [(match_operand:LASX_WD 1 "register_operand" "0")
+			 (match_operand:LASX_WD 2 "register_operand" "f")
+			 (match_operand 3 "const_<indeximm256>_operand" "")]
+			UNSPEC_LASX_XVINSVE0))]
+  "ISA_HAS_LASX"
+  "xvinsve0.<lasxfmt>\t%u0,%u2,%3"
+  [(set_attr "type" "simd_shf")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvinsve0_<lasxfmt_f>_scalar"
+  [(set (match_operand:FLASX 0 "register_operand" "=f")
+    (vec_merge:FLASX
+      (vec_duplicate:FLASX
+        (match_operand:<UNITMODE> 1 "register_operand" "f"))
+      (match_operand:FLASX 2 "register_operand" "0")
+      (match_operand 3 "const_<bitmask256>_operand" "")))]
+  "ISA_HAS_LASX"
+  "xvinsve0.<lasxfmt>\t%u0,%u1,%y3"
+  [(set_attr "type" "simd_insert")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvpickve_<lasxfmt_f>"
+  [(set (match_operand:LASX_WD 0 "register_operand" "=f")
+	(unspec:LASX_WD [(match_operand:LASX_WD 1 "register_operand" "f")
+			 (match_operand 2 "const_<indeximm256>_operand" "")]
+			UNSPEC_LASX_XVPICKVE))]
+  "ISA_HAS_LASX"
+  "xvpickve.<lasxfmt>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_shf")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvpickve_<lasxfmt_f>_scalar"
+  [(set (match_operand:<UNITMODE> 0 "register_operand" "=f")
+	(vec_select:<UNITMODE>
+	 (match_operand:FLASX 1 "register_operand" "f")
+	 (parallel [(match_operand 2 "const_<indeximm256>_operand" "")])))]
+  "ISA_HAS_LASX"
+  "xvpickve.<lasxfmt>\t%u0,%u1,%2"
+  [(set_attr "type" "simd_shf")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvssrlrn_<hlasxfmt>_<lasxfmt>"
+  [(set (match_operand:<VHSMODE256> 0 "register_operand" "=f")
+	(unspec:<VHSMODE256> [(match_operand:ILASX_DWH 1 "register_operand" "f")
+			      (match_operand:ILASX_DWH 2 "register_operand" "f")]
+			     UNSPEC_LASX_XVSSRLRN))]
+  "ISA_HAS_LASX"
+  "xvssrlrn.<hlasxfmt>.<lasxfmt>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "xvorn<mode>3"
+  [(set (match_operand:ILASX 0 "register_operand" "=f")
+	(ior:ILASX (not:ILASX (match_operand:ILASX 2 "register_operand" "f"))
+		   (match_operand:ILASX 1 "register_operand" "f")))]
+  "ISA_HAS_LASX"
+  "xvorn.v\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_logic")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "lasx_xvextl_qu_du"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand:V4DI 1 "register_operand" "f")]
+		     UNSPEC_LASX_XVEXTL_QU_DU))]
+  "ISA_HAS_LASX"
+  "xvextl.qu.du\t%u0,%u1"
+  [(set_attr "type" "simd_bit")
+   (set_attr "mode" "V4DI")])
+
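+;; Load a vector constant described by a 13-bit signed immediate.  When the
+;; top bit of the immediate is set, the next four bits select a replication
+;; pattern and only the patterns 0000 ~ 1100 are accepted; the output code
+;; below rejects anything else with "sorry".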
+(define_insn "lasx_xvldi"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(unspec:V4DI [(match_operand 1 "const_imm13_operand")]
+		    UNSPEC_LASX_XVLDI))]
+  "ISA_HAS_LASX"
+{
+  HOST_WIDE_INT val = INTVAL (operands[1]);
+  if (val < 0)
+    {
+      HOST_WIDE_INT modeVal = (val & 0xf00) >> 8;
+      if (modeVal < 13)
+	return  "xvldi\t%u0,%1";
+      else
+	{
+	  sorry ("imm13 only supports 0000 ~ 1100 in bits '12 ~ 9' when bit '13' is 1");
+	  return "#";
+	}
+    }
+  else
+    return "xvldi\t%u0,%1";
+}
+  [(set_attr "type" "simd_load")
+   (set_attr "mode" "V4DI")])
+
+(define_insn "lasx_xvldx"
+  [(set (match_operand:V32QI 0 "register_operand" "=f")
+	(unspec:V32QI [(match_operand:DI 1 "register_operand" "r")
+		       (match_operand:DI 2 "reg_or_0_operand" "rJ")]
+		      UNSPEC_LASX_XVLDX))]
+  "ISA_HAS_LASX"
+{
+  return "xvldx\t%u0,%1,%z2";
+}
+  [(set_attr "type" "simd_load")
+   (set_attr "mode" "V32QI")])
+
+(define_insn "lasx_xvstx"
+  [(set (mem:V32QI (plus:DI (match_operand:DI 1 "register_operand" "r")
+			    (match_operand:DI 2 "reg_or_0_operand" "rJ")))
+	(unspec:V32QI [(match_operand:V32QI 0 "register_operand" "f")]
+		      UNSPEC_LASX_XVSTX))]
+  "ISA_HAS_LASX"
+{
+  return "xvstx\t%u0,%1,%z2";
+}
+  [(set_attr "type" "simd_store")
+   (set_attr "mode" "DI")])
+
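+;; Standard vec_widen_{s,u}mult_even_v8si names, implemented with
+;; xvmulwev.d.w{,u}: the even word lanes of both sources are widened and
+;; multiplied into doubleword products.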
+(define_insn "vec_widen_<su>mult_even_v8si"
+  [(set (match_operand:V4DI 0 "register_operand" "=f")
+	(mult:V4DI
+	  (any_extend:V4DI
+	    (vec_select:V4SI
+	      (match_operand:V8SI 1 "register_operand" "%f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)])))
+	  (any_extend:V4DI
+	    (vec_select:V4SI
+	      (match_operand:V8SI 2 "register_operand" "f")
+	      (parallel [(const_int 0) (const_int 2)
+			 (const_int 4) (const_int 6)])))))]
+  "ISA_HAS_LASX"
+  "xvmulwev.d.w<u>\t%u0,%u1,%u2"
+  [(set_attr "type" "simd_int_arith")
+   (set_attr "mode" "V4DI")])
+
+;; Vector reduction operation
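+;; The V4DI sum adds the even/odd doubleword of each 128-bit lane
+;; (xvhaddw.q.d), permutes the high lane's partial sum down beside the low
+;; one, adds the two and extracts element 0; the V8SI expander first reduces
+;; words to doublewords with xvhaddw.d.w and then reuses the same sequence.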
+(define_expand "reduc_plus_scal_v4di"
+  [(match_operand:DI 0 "register_operand")
+   (match_operand:V4DI 1 "register_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx tmp = gen_reg_rtx (V4DImode);
+  rtx tmp1 = gen_reg_rtx (V4DImode);
+  rtx vec_res = gen_reg_rtx (V4DImode);
+  emit_insn (gen_lasx_xvhaddw_q_d (tmp, operands[1], operands[1]));
+  emit_insn (gen_lasx_xvpermi_d_v4di (tmp1, tmp, GEN_INT (2)));
+  emit_insn (gen_addv4di3 (vec_res, tmp, tmp1));
+  emit_insn (gen_vec_extractv4didi (operands[0], vec_res, const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_plus_scal_v8si"
+  [(match_operand:SI 0 "register_operand")
+   (match_operand:V8SI 1 "register_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx tmp = gen_reg_rtx (V4DImode);
+  rtx tmp1 = gen_reg_rtx (V4DImode);
+  rtx vec_res = gen_reg_rtx (V4DImode);
+  emit_insn (gen_lasx_xvhaddw_d_w (tmp, operands[1], operands[1]));
+  emit_insn (gen_lasx_xvhaddw_q_d (tmp1, tmp, tmp));
+  emit_insn (gen_lasx_xvpermi_d_v4di (tmp, tmp1, GEN_INT (2)));
+  emit_insn (gen_addv4di3 (vec_res, tmp, tmp1));
+  emit_insn (gen_vec_extractv8sisi (operands[0], gen_lowpart (V8SImode,vec_res),
+				    const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_plus_scal_<mode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:FLASX 1 "register_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx tmp = gen_reg_rtx (<MODE>mode);
+  loongarch_expand_vector_reduc (gen_add<mode>3, tmp, operands[1]);
+  emit_insn (gen_vec_extract<mode><unitmode> (operands[0], tmp,
+					      const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_<optab>_scal_<mode>"
+  [(any_bitwise:<UNITMODE>
+     (match_operand:<UNITMODE> 0 "register_operand")
+     (match_operand:ILASX 1 "register_operand"))]
+  "ISA_HAS_LASX"
+{
+  rtx tmp = gen_reg_rtx (<MODE>mode);
+  loongarch_expand_vector_reduc (gen_<optab><mode>3, tmp, operands[1]);
+  emit_insn (gen_vec_extract<mode><unitmode> (operands[0], tmp,
+					      const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_smax_scal_<mode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:LASX 1 "register_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx tmp = gen_reg_rtx (<MODE>mode);
+  loongarch_expand_vector_reduc (gen_smax<mode>3, tmp, operands[1]);
+  emit_insn (gen_vec_extract<mode><unitmode> (operands[0], tmp,
+					      const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_smin_scal_<mode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:LASX 1 "register_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx tmp = gen_reg_rtx (<MODE>mode);
+  loongarch_expand_vector_reduc (gen_smin<mode>3, tmp, operands[1]);
+  emit_insn (gen_vec_extract<mode><unitmode> (operands[0], tmp,
+					      const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_umax_scal_<mode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:ILASX 1 "register_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx tmp = gen_reg_rtx (<MODE>mode);
+  loongarch_expand_vector_reduc (gen_umax<mode>3, tmp, operands[1]);
+  emit_insn (gen_vec_extract<mode><unitmode> (operands[0], tmp,
+					      const0_rtx));
+  DONE;
+})
+
+(define_expand "reduc_umin_scal_<mode>"
+  [(match_operand:<UNITMODE> 0 "register_operand")
+   (match_operand:ILASX 1 "register_operand")]
+  "ISA_HAS_LASX"
+{
+  rtx tmp = gen_reg_rtx (<MODE>mode);
+  loongarch_expand_vector_reduc (gen_umin<mode>3, tmp, operands[1]);
+  emit_insn (gen_vec_extract<mode><unitmode> (operands[0], tmp,
+					      const0_rtx));
+  DONE;
+})
diff --git a/gcc/config/loongarch/loongarch-modes.def b/gcc/config/loongarch/loongarch-modes.def
index 6f57b60525d..68a829316f4 100644
--- a/gcc/config/loongarch/loongarch-modes.def
+++ b/gcc/config/loongarch/loongarch-modes.def
@@ -33,6 +33,7 @@ VECTOR_MODES (FLOAT, 8);      /*       V4HF V2SF */
 VECTOR_MODES (INT, 16);	      /* V16QI V8HI V4SI V2DI */
 VECTOR_MODES (FLOAT, 16);     /*	    V4SF V2DF */
 
+/* For LARCH LASX 256 bits.  */
 VECTOR_MODES (INT, 32);	      /* V32QI V16HI V8SI V4DI */
 VECTOR_MODES (FLOAT, 32);     /*	     V8SF V4DF */
 
diff --git a/gcc/config/loongarch/loongarch-protos.h b/gcc/config/loongarch/loongarch-protos.h
index fc33527cdcf..f4430d0d418 100644
--- a/gcc/config/loongarch/loongarch-protos.h
+++ b/gcc/config/loongarch/loongarch-protos.h
@@ -89,6 +89,8 @@ extern bool loongarch_split_move_insn_p (rtx, rtx);
 extern void loongarch_split_move_insn (rtx, rtx, rtx);
 extern void loongarch_split_128bit_move (rtx, rtx);
 extern bool loongarch_split_128bit_move_p (rtx, rtx);
+extern void loongarch_split_256bit_move (rtx, rtx);
+extern bool loongarch_split_256bit_move_p (rtx, rtx);
 extern void loongarch_split_lsx_copy_d (rtx, rtx, rtx, rtx (*)(rtx, rtx, rtx));
 extern void loongarch_split_lsx_insert_d (rtx, rtx, rtx, rtx);
 extern void loongarch_split_lsx_fill_d (rtx, rtx);
@@ -174,9 +176,11 @@ union loongarch_gen_fn_ptrs
 extern void loongarch_expand_atomic_qihi (union loongarch_gen_fn_ptrs,
 					  rtx, rtx, rtx, rtx, rtx);
 
+extern void loongarch_expand_vector_group_init (rtx, rtx);
 extern void loongarch_expand_vector_init (rtx, rtx);
 extern void loongarch_expand_vec_unpack (rtx op[2], bool, bool);
 extern void loongarch_expand_vec_perm (rtx, rtx, rtx, rtx);
+extern void loongarch_expand_vec_perm_1 (rtx[]);
 extern void loongarch_expand_vector_extract (rtx, rtx, int);
 extern void loongarch_expand_vector_reduc (rtx (*)(rtx, rtx, rtx), rtx, rtx);
 
diff --git a/gcc/config/loongarch/loongarch.cc b/gcc/config/loongarch/loongarch.cc
index 9f4a7d7922b..0e4c0c7d757 100644
--- a/gcc/config/loongarch/loongarch.cc
+++ b/gcc/config/loongarch/loongarch.cc
@@ -1928,7 +1928,7 @@ loongarch_symbol_insns (enum loongarch_symbol_type type, machine_mode mode)
 {
   /* LSX LD.* and ST.* cannot support loading symbols via an immediate
      operand.  */
-  if (LSX_SUPPORTED_MODE_P (mode))
+  if (LSX_SUPPORTED_MODE_P (mode) || LASX_SUPPORTED_MODE_P (mode))
     return 0;
 
   switch (type)
@@ -2061,6 +2061,11 @@ loongarch_valid_offset_p (rtx x, machine_mode mode)
 					loongarch_ldst_scaled_shift (mode)))
     return false;
 
+  /* LASX XVLD.B and XVST.B support 10-bit signed offsets without shift.  */
+  if (LASX_SUPPORTED_MODE_P (mode)
+      && !loongarch_signed_immediate_p (INTVAL (x), 10, 0))
+    return false;
+
   return true;
 }
 
@@ -2273,7 +2278,9 @@ loongarch_address_insns (rtx x, machine_mode mode, bool might_split_p)
 {
   struct loongarch_address_info addr;
   int factor;
-  bool lsx_p = !might_split_p && LSX_SUPPORTED_MODE_P (mode);
+  bool lsx_p = (!might_split_p
+		&& (LSX_SUPPORTED_MODE_P (mode)
+		    || LASX_SUPPORTED_MODE_P (mode)));
 
   if (!loongarch_classify_address (&addr, x, mode, false))
     return 0;
@@ -2419,7 +2426,8 @@ loongarch_const_insns (rtx x)
       return loongarch_integer_cost (INTVAL (x));
 
     case CONST_VECTOR:
-      if (LSX_SUPPORTED_MODE_P (GET_MODE (x))
+      if ((LSX_SUPPORTED_MODE_P (GET_MODE (x))
+	   || LASX_SUPPORTED_MODE_P (GET_MODE (x)))
 	  && loongarch_const_vector_same_int_p (x, GET_MODE (x), -512, 511))
 	return 1;
       /* Fall through.  */
@@ -3258,10 +3266,11 @@ loongarch_legitimize_move (machine_mode mode, rtx dest, rtx src)
 
   /* Both src and dest are non-registers;  one special case is supported where
      the source is (const_int 0) and the store can source the zero register.
-     LSX is never able to source the zero register directly in
+     LSX and LASX are never able to source the zero register directly in
      memory operations.  */
   if (!register_operand (dest, mode) && !register_operand (src, mode)
-      && (!const_0_operand (src, mode) || LSX_SUPPORTED_MODE_P (mode)))
+      && (!const_0_operand (src, mode)
+	  || LSX_SUPPORTED_MODE_P (mode) || LASX_SUPPORTED_MODE_P (mode)))
     {
       loongarch_emit_move (dest, force_reg (mode, src));
       return true;
@@ -3843,6 +3852,7 @@ loongarch_builtin_vectorization_cost (enum vect_cost_for_stmt type_of_cost,
 				      int misalign ATTRIBUTE_UNUSED)
 {
   unsigned elements;
+  machine_mode mode = vectype != NULL ? TYPE_MODE (vectype) : DImode;
 
   switch (type_of_cost)
     {
@@ -3859,7 +3869,8 @@ loongarch_builtin_vectorization_cost (enum vect_cost_for_stmt type_of_cost,
 	return 1;
 
       case vec_perm:
-	return 1;
+	return LASX_SUPPORTED_MODE_P (mode)
+	  && !LSX_SUPPORTED_MODE_P (mode) ? 2 : 1;
 
       case unaligned_load:
       case vector_gather_load:
@@ -3940,6 +3951,10 @@ loongarch_split_move_p (rtx dest, rtx src)
   if (LSX_SUPPORTED_MODE_P (GET_MODE (dest)))
     return loongarch_split_128bit_move_p (dest, src);
 
+  /* Check if LASX moves need splitting.  */
+  if (LASX_SUPPORTED_MODE_P (GET_MODE (dest)))
+    return loongarch_split_256bit_move_p (dest, src);
+
   /* Otherwise split all multiword moves.  */
   return size > UNITS_PER_WORD;
 }
@@ -3955,6 +3970,8 @@ loongarch_split_move (rtx dest, rtx src, rtx insn_)
   gcc_checking_assert (loongarch_split_move_p (dest, src));
   if (LSX_SUPPORTED_MODE_P (GET_MODE (dest)))
     loongarch_split_128bit_move (dest, src);
+  else if (LASX_SUPPORTED_MODE_P (GET_MODE (dest)))
+    loongarch_split_256bit_move (dest, src);
   else if (FP_REG_RTX_P (dest) || FP_REG_RTX_P (src))
     {
       if (!TARGET_64BIT && GET_MODE (dest) == DImode)
@@ -4120,7 +4137,7 @@ const char *
 loongarch_output_move_index_float (rtx x, machine_mode mode, bool ldr)
 {
   int index = exact_log2 (GET_MODE_SIZE (mode));
-  if (!IN_RANGE (index, 2, 4))
+  if (!IN_RANGE (index, 2, 5))
     return NULL;
 
   struct loongarch_address_info info;
@@ -4129,17 +4146,19 @@ loongarch_output_move_index_float (rtx x, machine_mode mode, bool ldr)
       || !loongarch_legitimate_address_p (mode, x, false))
     return NULL;
 
-  const char *const insn[][3] =
+  const char *const insn[][4] =
     {
 	{
 	  "fstx.s\t%1,%0",
 	  "fstx.d\t%1,%0",
-	  "vstx\t%w1,%0"
+	  "vstx\t%w1,%0",
+	  "xvstx\t%u1,%0"
 	},
 	{
 	  "fldx.s\t%0,%1",
 	  "fldx.d\t%0,%1",
-	  "vldx\t%w0,%1"
+	  "vldx\t%w0,%1",
+	  "xvldx\t%u0,%1"
 	}
     };
 
@@ -4173,6 +4192,34 @@ loongarch_split_128bit_move_p (rtx dest, rtx src)
   return true;
 }
 
+/* Return true if a 256-bit move from SRC to DEST should be split.  */
+
+bool
+loongarch_split_256bit_move_p (rtx dest, rtx src)
+{
+  /* LASX-to-LASX moves can be done in a single instruction.  */
+  if (FP_REG_RTX_P (src) && FP_REG_RTX_P (dest))
+    return false;
+
+  /* Check for LASX loads and stores.  */
+  if (FP_REG_RTX_P (dest) && MEM_P (src))
+    return false;
+  if (FP_REG_RTX_P (src) && MEM_P (dest))
+    return false;
+
+  /* Check for LASX set to an immediate const vector with valid replicated
+     element.  */
+  if (FP_REG_RTX_P (dest)
+      && loongarch_const_vector_same_int_p (src, GET_MODE (src), -512, 511))
+    return false;
+
+  /* Check for LASX load zero immediate.  */
+  if (FP_REG_RTX_P (dest) && src == CONST0_RTX (GET_MODE (src)))
+    return false;
+
+  return true;
+}
+
 /* Split a 128-bit move from SRC to DEST.  */
 
 void
@@ -4264,6 +4311,97 @@ loongarch_split_128bit_move (rtx dest, rtx src)
     }
 }
 
+/* Split a 256-bit move from SRC to DEST.  */
+
+void
+loongarch_split_256bit_move (rtx dest, rtx src)
+{
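+  /* Three cases are handled: a move into an LASX register is split into one
+     xvinsgr2vr per word, a move out of an LASX register is split into one
+     vpickve2gr per word, and the remaining case is split into word-sized
+     moves ordered so that no source register is overwritten before it is
+     read.  */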
+  int byte, index;
+  rtx low_dest, low_src, d, s;
+
+  if (FP_REG_RTX_P (dest))
+    {
+      gcc_assert (!MEM_P (src));
+
+      rtx new_dest = dest;
+      if (!TARGET_64BIT)
+	{
+	  if (GET_MODE (dest) != V8SImode)
+	    new_dest = simplify_gen_subreg (V8SImode, dest, GET_MODE (dest), 0);
+	}
+      else
+	{
+	  if (GET_MODE (dest) != V4DImode)
+	    new_dest = simplify_gen_subreg (V4DImode, dest, GET_MODE (dest), 0);
+	}
+
+      for (byte = 0, index = 0; byte < GET_MODE_SIZE (GET_MODE (dest));
+	   byte += UNITS_PER_WORD, index++)
+	{
+	  s = loongarch_subword_at_byte (src, byte);
+	  if (!TARGET_64BIT)
+	    emit_insn (gen_lasx_xvinsgr2vr_w (new_dest, s, new_dest,
+					      GEN_INT (1 << index)));
+	  else
+	    emit_insn (gen_lasx_xvinsgr2vr_d (new_dest, s, new_dest,
+					      GEN_INT (1 << index)));
+	}
+    }
+  else if (FP_REG_RTX_P (src))
+    {
+      gcc_assert (!MEM_P (dest));
+
+      rtx new_src = src;
+      if (!TARGET_64BIT)
+	{
+	  if (GET_MODE (src) != V8SImode)
+	    new_src = simplify_gen_subreg (V8SImode, src, GET_MODE (src), 0);
+	}
+      else
+	{
+	  if (GET_MODE (src) != V4DImode)
+	    new_src = simplify_gen_subreg (V4DImode, src, GET_MODE (src), 0);
+	}
+
+      for (byte = 0, index = 0; byte < GET_MODE_SIZE (GET_MODE (src));
+	   byte += UNITS_PER_WORD, index++)
+	{
+	  d = loongarch_subword_at_byte (dest, byte);
+	  if (!TARGET_64BIT)
+	    emit_insn (gen_lsx_vpickve2gr_w (d, new_src, GEN_INT (index)));
+	  else
+	    emit_insn (gen_lsx_vpickve2gr_d (d, new_src, GEN_INT (index)));
+	}
+    }
+  else
+    {
+      low_dest = loongarch_subword_at_byte (dest, 0);
+      low_src = loongarch_subword_at_byte (src, 0);
+      gcc_assert (REG_P (low_dest) && REG_P (low_src));
+      /* Make sure the source register is not written before reading.  */
+      if (REGNO (low_dest) <= REGNO (low_src))
+	{
+	  for (byte = 0; byte < GET_MODE_SIZE (TImode);
+	       byte += UNITS_PER_WORD)
+	    {
+	      d = loongarch_subword_at_byte (dest, byte);
+	      s = loongarch_subword_at_byte (src, byte);
+	      loongarch_emit_move (d, s);
+	    }
+	}
+      else
+	{
+	  for (byte = GET_MODE_SIZE (TImode) - UNITS_PER_WORD; byte >= 0;
+	       byte -= UNITS_PER_WORD)
+	    {
+	      d = loongarch_subword_at_byte (dest, byte);
+	      s = loongarch_subword_at_byte (src, byte);
+	      loongarch_emit_move (d, s);
+	    }
+	}
+    }
+}
+
 
 /* Split a COPY_S.D with operands DEST, SRC and INDEX.  GEN is a function
    used to generate subregs.  */
@@ -4351,11 +4489,12 @@ loongarch_output_move (rtx dest, rtx src)
   machine_mode mode = GET_MODE (dest);
   bool dbl_p = (GET_MODE_SIZE (mode) == 8);
   bool lsx_p = LSX_SUPPORTED_MODE_P (mode);
+  bool lasx_p = LASX_SUPPORTED_MODE_P (mode);
 
   if (loongarch_split_move_p (dest, src))
     return "#";
 
-  if ((lsx_p)
+  if ((lsx_p || lasx_p)
       && dest_code == REG && FP_REG_P (REGNO (dest))
       && src_code == CONST_VECTOR
       && CONST_INT_P (CONST_VECTOR_ELT (src, 0)))
@@ -4365,6 +4504,8 @@ loongarch_output_move (rtx dest, rtx src)
 	{
 	case 16:
 	  return "vrepli.%v0\t%w0,%E1";
+	case 32:
+	  return "xvrepli.%v0\t%u0,%E1";
 	default: gcc_unreachable ();
 	}
     }
@@ -4379,13 +4520,15 @@ loongarch_output_move (rtx dest, rtx src)
 
 	  if (FP_REG_P (REGNO (dest)))
 	    {
-	      if (lsx_p)
+	      if (lsx_p || lasx_p)
 		{
 		  gcc_assert (src == CONST0_RTX (GET_MODE (src)));
 		  switch (GET_MODE_SIZE (mode))
 		    {
 		    case 16:
 		      return "vrepli.b\t%w0,0";
+		    case 32:
+		      return "xvrepli.b\t%u0,0";
 		    default:
 		      gcc_unreachable ();
 		    }
@@ -4518,12 +4661,14 @@ loongarch_output_move (rtx dest, rtx src)
     {
       if (dest_code == REG && FP_REG_P (REGNO (dest)))
 	{
-	  if (lsx_p)
+	  if (lsx_p || lasx_p)
 	    {
 	      switch (GET_MODE_SIZE (mode))
 		{
 		case 16:
 		  return "vori.b\t%w0,%w1,0";
+		case 32:
+		  return "xvori.b\t%u0,%u1,0";
 		default:
 		  gcc_unreachable ();
 		}
@@ -4541,12 +4686,14 @@ loongarch_output_move (rtx dest, rtx src)
 	  if (insn)
 	    return insn;
 
-	  if (lsx_p)
+	  if (lsx_p || lasx_p)
 	    {
 	      switch (GET_MODE_SIZE (mode))
 		{
 		case 16:
 		  return "vst\t%w1,%0";
+		case 32:
+		  return "xvst\t%u1,%0";
 		default:
 		  gcc_unreachable ();
 		}
@@ -4567,12 +4714,14 @@ loongarch_output_move (rtx dest, rtx src)
 	  if (insn)
 	    return insn;
 
-	  if (lsx_p)
+	  if (lsx_p || lasx_p)
 	    {
 	      switch (GET_MODE_SIZE (mode))
 		{
 		case 16:
 		  return "vld\t%w0,%1";
+		case 32:
+		  return "xvld\t%u0,%1";
 		default:
 		  gcc_unreachable ();
 		}
@@ -5545,18 +5694,27 @@ loongarch_print_operand_reloc (FILE *file, rtx op, bool hi64_part,
    'T'	Print 'f' for (eq:CC ...), 't' for (ne:CC ...),
 	      'z' for (eq:?I ...), 'n' for (ne:?I ...).
    't'	Like 'T', but with the EQ/NE cases reversed
-   'V'	Print exact log2 of CONST_INT OP element 0 of a replicated
-	  CONST_VECTOR in decimal.
+   'w'	Print a LSX register.
+   'u'	Print a LASX register.
    'v'	Print the insn size suffix b, h, w or d for vector modes V16QI, V8HI,
 	  V4SI, V2SI, and w, d for vector modes V4SF, V2DF respectively.
+   'V'	Print exact log2 of CONST_INT OP element 0 of a replicated
+	  CONST_VECTOR in decimal.
    'W'	Print the inverse of the FPU branch condition for comparison OP.
-   'w'	Print a LSX register.
    'X'	Print CONST_INT OP in hexadecimal format.
    'x'	Print the low 16 bits of CONST_INT OP in hexadecimal format.
    'Y'	Print loongarch_fp_conditions[INTVAL (OP)]
    'y'	Print exact log2 of CONST_INT OP in decimal.
    'Z'	Print OP and a comma for 8CC, otherwise print nothing.
-   'z'	Print $r0 if OP is zero, otherwise print OP normally.  */
+   'z'	Print $0 if OP is zero, otherwise print OP normally.  */
 
 static void
 loongarch_print_operand (FILE *file, rtx op, int letter)
@@ -5698,46 +5856,11 @@ loongarch_print_operand (FILE *file, rtx op, int letter)
 	output_operand_lossage ("invalid use of '%%%c'", letter);
       break;
 
-    case 'v':
-      switch (GET_MODE (op))
-	{
-	case E_V16QImode:
-	case E_V32QImode:
-	  fprintf (file, "b");
-	  break;
-	case E_V8HImode:
-	case E_V16HImode:
-	  fprintf (file, "h");
-	  break;
-	case E_V4SImode:
-	case E_V4SFmode:
-	case E_V8SImode:
-	case E_V8SFmode:
-	  fprintf (file, "w");
-	  break;
-	case E_V2DImode:
-	case E_V2DFmode:
-	case E_V4DImode:
-	case E_V4DFmode:
-	  fprintf (file, "d");
-	  break;
-	default:
-	  output_operand_lossage ("invalid use of '%%%c'", letter);
-	}
-      break;
-
     case 'W':
       loongarch_print_float_branch_condition (file, reverse_condition (code),
 					      letter);
       break;
 
-    case 'w':
-      if (code == REG && LSX_REG_P (REGNO (op)))
-	fprintf (file, "$vr%s", &reg_names[REGNO (op)][2]);
-      else
-	output_operand_lossage ("invalid use of '%%%c'", letter);
-      break;
-
     case 'x':
       if (CONST_INT_P (op))
 	fprintf (file, HOST_WIDE_INT_PRINT_HEX, INTVAL (op) & 0xffff);
@@ -5779,6 +5902,48 @@ loongarch_print_operand (FILE *file, rtx op, int letter)
       fputc (',', file);
       break;
 
+    case 'w':
+      if (code == REG && LSX_REG_P (REGNO (op)))
+	fprintf (file, "$vr%s", &reg_names[REGNO (op)][2]);
+      else
+	output_operand_lossage ("invalid use of '%%%c'", letter);
+      break;
+
+    case 'u':
+      if (code == REG && LASX_REG_P (REGNO (op)))
+	fprintf (file, "$xr%s", &reg_names[REGNO (op)][2]);
+      else
+	output_operand_lossage ("invalid use of '%%%c'", letter);
+      break;
+
+    case 'v':
+      switch (GET_MODE (op))
+	{
+	case E_V16QImode:
+	case E_V32QImode:
+	  fprintf (file, "b");
+	  break;
+	case E_V8HImode:
+	case E_V16HImode:
+	  fprintf (file, "h");
+	  break;
+	case E_V4SImode:
+	case E_V4SFmode:
+	case E_V8SImode:
+	case E_V8SFmode:
+	  fprintf (file, "w");
+	  break;
+	case E_V2DImode:
+	case E_V2DFmode:
+	case E_V4DImode:
+	case E_V4DFmode:
+	  fprintf (file, "d");
+	  break;
+	default:
+	  output_operand_lossage ("invalid use of '%%%c'", letter);
+	}
+      break;
+
     default:
       switch (code)
 	{
@@ -6109,13 +6274,18 @@ loongarch_hard_regno_mode_ok_uncached (unsigned int regno, machine_mode mode)
   size = GET_MODE_SIZE (mode);
   mclass = GET_MODE_CLASS (mode);
 
-  if (GP_REG_P (regno) && !LSX_SUPPORTED_MODE_P (mode))
+  if (GP_REG_P (regno) && !LSX_SUPPORTED_MODE_P (mode)
+      && !LASX_SUPPORTED_MODE_P (mode))
     return ((regno - GP_REG_FIRST) & 1) == 0 || size <= UNITS_PER_WORD;
 
   /* For LSX, allow TImode and 128-bit vector modes in all FPR.  */
   if (FP_REG_P (regno) && LSX_SUPPORTED_MODE_P (mode))
     return true;
 
+  /* For LASX, allow 256-bit vector modes in all FPR.  */
+  if (FP_REG_P (regno) && LASX_SUPPORTED_MODE_P (mode))
+    return true;
+
   if (FP_REG_P (regno))
     {
       if (mclass == MODE_FLOAT
@@ -6168,6 +6338,9 @@ loongarch_hard_regno_nregs (unsigned int regno, machine_mode mode)
       if (LSX_SUPPORTED_MODE_P (mode))
 	return 1;
 
+      if (LASX_SUPPORTED_MODE_P (mode))
+	return 1;
+
       return (GET_MODE_SIZE (mode) + UNITS_PER_FPREG - 1) / UNITS_PER_FPREG;
     }
 
@@ -6197,7 +6370,10 @@ loongarch_class_max_nregs (enum reg_class rclass, machine_mode mode)
     {
       if (loongarch_hard_regno_mode_ok (FP_REG_FIRST, mode))
 	{
-	  if (LSX_SUPPORTED_MODE_P (mode))
+	  if (LASX_SUPPORTED_MODE_P (mode))
+	    size = MIN (size, UNITS_PER_LASX_REG);
+	  else if (LSX_SUPPORTED_MODE_P (mode))
 	    size = MIN (size, UNITS_PER_LSX_REG);
 	  else
 	    size = MIN (size, UNITS_PER_FPREG);
@@ -6215,6 +6391,10 @@ static bool
 loongarch_can_change_mode_class (machine_mode from, machine_mode to,
 				 reg_class_t rclass)
 {
+  /* Allow conversions between different LSX/LASX vector modes.  */
+  if (LASX_SUPPORTED_MODE_P (from) && LASX_SUPPORTED_MODE_P (to))
+    return true;
+
   /* Allow conversions between different LSX vector modes.  */
   if (LSX_SUPPORTED_MODE_P (from) && LSX_SUPPORTED_MODE_P (to))
     return true;
@@ -6238,7 +6418,8 @@ loongarch_mode_ok_for_mov_fmt_p (machine_mode mode)
       return TARGET_HARD_FLOAT && TARGET_DOUBLE_FLOAT;
 
     default:
-      return LSX_SUPPORTED_MODE_P (mode);
+      return ISA_HAS_LASX ? LASX_SUPPORTED_MODE_P (mode)
+	: LSX_SUPPORTED_MODE_P (mode);
     }
 }
 
@@ -6440,7 +6621,8 @@ loongarch_valid_pointer_mode (scalar_int_mode mode)
 static bool
 loongarch_vector_mode_supported_p (machine_mode mode)
 {
-  return LSX_SUPPORTED_MODE_P (mode);
+  return ISA_HAS_LASX ? LASX_SUPPORTED_MODE_P (mode)
+    : LSX_SUPPORTED_MODE_P (mode);
 }
 
 /* Implement TARGET_SCALAR_MODE_SUPPORTED_P.  */
@@ -6466,19 +6648,19 @@ loongarch_preferred_simd_mode (scalar_mode mode)
   switch (mode)
     {
     case E_QImode:
-      return E_V16QImode;
+      return ISA_HAS_LASX ? E_V32QImode : E_V16QImode;
     case E_HImode:
-      return E_V8HImode;
+      return ISA_HAS_LASX ? E_V16HImode : E_V8HImode;
     case E_SImode:
-      return E_V4SImode;
+      return ISA_HAS_LASX ? E_V8SImode : E_V4SImode;
     case E_DImode:
-      return E_V2DImode;
+      return ISA_HAS_LASX ? E_V4DImode : E_V2DImode;
 
     case E_SFmode:
-      return E_V4SFmode;
+      return ISA_HAS_LASX ? E_V8SFmode : E_V4SFmode;
 
     case E_DFmode:
-      return E_V2DFmode;
+      return ISA_HAS_LASX ? E_V4DFmode : E_V2DFmode;
 
     default:
       break;
@@ -6489,7 +6671,12 @@ loongarch_preferred_simd_mode (scalar_mode mode)
 static unsigned int
 loongarch_autovectorize_vector_modes (vector_modes *modes, bool)
 {
-  if (ISA_HAS_LSX)
+  if (ISA_HAS_LASX)
+    {
+      modes->safe_push (V32QImode);
+      modes->safe_push (V16QImode);
+    }
+  else if (ISA_HAS_LSX)
     {
       modes->safe_push (V16QImode);
     }
@@ -6669,11 +6856,18 @@ const char *
 loongarch_lsx_output_division (const char *division, rtx *operands)
 {
   const char *s;
+  machine_mode mode = GET_MODE (*operands);
 
   s = division;
   if (TARGET_CHECK_ZERO_DIV)
     {
-      if (ISA_HAS_LSX)
+      if (ISA_HAS_LASX && GET_MODE_SIZE (mode) == 32)
+	{
+	  output_asm_insn ("xvsetallnez.%v0\t$fcc7,%u2", operands);
+	  output_asm_insn (s, operands);
+	  output_asm_insn ("bcnez\t$fcc7,1f", operands);
+	}
+      else if (ISA_HAS_LSX)
 	{
 	  output_asm_insn ("vsetallnez.%v0\t$fcc7,%w2",operands);
 	  output_asm_insn (s, operands);
@@ -7501,7 +7695,7 @@ loongarch_expand_lsx_shuffle (struct expand_vec_perm_d *d)
   rtx_insn *insn;
   unsigned i;
 
-  if (!ISA_HAS_LSX)
+  if (!ISA_HAS_LSX && !ISA_HAS_LASX)
     return false;
 
   for (i = 0; i < d->nelt; i++)
@@ -7525,49 +7719,422 @@ loongarch_expand_lsx_shuffle (struct expand_vec_perm_d *d)
   return true;
 }
 
-void
-loongarch_expand_vec_perm (rtx target, rtx op0, rtx op1, rtx sel)
+/* Try to simplify a two vector permutation using 2 intra-lane interleave
+   insns and cross-lane shuffle for 32-byte vectors.  */
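+/* For example, for V8SImode with d->perm[0] == 0 the selector accepted here
+   is { 0, 8, 1, 9, 2, 10, 3, 11 }: xvilvl/xvilvh interleave the 128-bit
+   halves of the two operands and a final xvpermi.q selects the two halves
+   that hold the required result.  */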
+
+static bool
+loongarch_expand_vec_perm_interleave (struct expand_vec_perm_d *d)
 {
-  machine_mode vmode = GET_MODE (target);
+  unsigned i, nelt;
+  rtx t1, t2, t3;
+  rtx (*gen_high) (rtx, rtx, rtx);
+  rtx (*gen_low) (rtx, rtx, rtx);
+  machine_mode mode = GET_MODE (d->target);
 
-  gcc_checking_assert (vmode == E_V16QImode
-      || vmode == E_V2DImode || vmode == E_V2DFmode
-      || vmode == E_V4SImode || vmode == E_V4SFmode
-      || vmode == E_V8HImode);
-  gcc_checking_assert (GET_MODE (op0) == vmode);
-  gcc_checking_assert (GET_MODE (op1) == vmode);
-  gcc_checking_assert (GET_MODE (sel) == vmode);
-  gcc_checking_assert (ISA_HAS_LSX);
+  if (d->one_vector_p)
+    return false;
+  if (!TARGET_LASX || GET_MODE_SIZE (d->vmode) != 32)
+    return false;
 
-  switch (vmode)
+  nelt = d->nelt;
+  if (d->perm[0] != 0 && d->perm[0] != nelt / 2)
+    return false;
+  for (i = 0; i < nelt; i += 2)
+    if (d->perm[i] != d->perm[0] + i / 2
+	|| d->perm[i + 1] != d->perm[0] + i / 2 + nelt)
+      return false;
+
+  if (d->testing_p)
+    return true;
+
+  switch (d->vmode)
     {
-    case E_V16QImode:
-      emit_insn (gen_lsx_vshuf_b (target, op1, op0, sel));
+    case E_V32QImode:
+      gen_high = gen_lasx_xvilvh_b;
+      gen_low = gen_lasx_xvilvl_b;
       break;
-    case E_V2DFmode:
-      emit_insn (gen_lsx_vshuf_d_f (target, sel, op1, op0));
+    case E_V16HImode:
+      gen_high = gen_lasx_xvilvh_h;
+      gen_low = gen_lasx_xvilvl_h;
       break;
-    case E_V2DImode:
-      emit_insn (gen_lsx_vshuf_d (target, sel, op1, op0));
+    case E_V8SImode:
+      gen_high = gen_lasx_xvilvh_w;
+      gen_low = gen_lasx_xvilvl_w;
       break;
-    case E_V4SFmode:
-      emit_insn (gen_lsx_vshuf_w_f (target, sel, op1, op0));
+    case E_V4DImode:
+      gen_high = gen_lasx_xvilvh_d;
+      gen_low = gen_lasx_xvilvl_d;
       break;
-    case E_V4SImode:
-      emit_insn (gen_lsx_vshuf_w (target, sel, op1, op0));
+    case E_V8SFmode:
+      gen_high = gen_lasx_xvilvh_w_f;
+      gen_low = gen_lasx_xvilvl_w_f;
       break;
-    case E_V8HImode:
-      emit_insn (gen_lsx_vshuf_h (target, sel, op1, op0));
+    case E_V4DFmode:
+      gen_high = gen_lasx_xvilvh_d_f;
+      gen_low = gen_lasx_xvilvl_d_f;
       break;
     default:
-      break;
+      gcc_unreachable ();
     }
-}
 
-static bool
-loongarch_try_expand_lsx_vshuf_const (struct expand_vec_perm_d *d)
-{
-  int i;
+  t1 = gen_reg_rtx (mode);
+  t2 = gen_reg_rtx (mode);
+  emit_insn (gen_high (t1, d->op0, d->op1));
+  emit_insn (gen_low (t2, d->op0, d->op1));
+  if (mode == V4DFmode || mode == V8SFmode)
+    {
+      t3 = gen_reg_rtx (V4DFmode);
+      if (d->perm[0])
+	emit_insn (gen_lasx_xvpermi_q_v4df (t3, gen_lowpart (V4DFmode, t1),
+					    gen_lowpart (V4DFmode, t2),
+					    GEN_INT (0x31)));
+      else
+	emit_insn (gen_lasx_xvpermi_q_v4df (t3, gen_lowpart (V4DFmode, t1),
+					    gen_lowpart (V4DFmode, t2),
+					    GEN_INT (0x20)));
+    }
+  else
+    {
+      t3 = gen_reg_rtx (V4DImode);
+      if (d->perm[0])
+	emit_insn (gen_lasx_xvpermi_q_v4di (t3, gen_lowpart (V4DImode, t1),
+					    gen_lowpart (V4DImode, t2),
+					    GEN_INT (0x31)));
+      else
+	emit_insn (gen_lasx_xvpermi_q_v4di (t3, gen_lowpart (V4DImode, t1),
+					    gen_lowpart (V4DImode, t2),
+					    GEN_INT (0x20)));
+    }
+  emit_move_insn (d->target, gen_lowpart (mode, t3));
+  return true;
+}
+
+/* Implement extract-even and extract-odd permutations.  */
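+/* For example, for V8SImode the odd-extraction selector is
+   { 1, 3, 5, 7, 9, 11, 13, 15 }: xvpickod.w gathers the odd elements of each
+   128-bit half and xvpermi.d then reorders the four 64-bit chunks into the
+   expected order.  */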
+
+static bool
+loongarch_expand_vec_perm_even_odd_1 (struct expand_vec_perm_d *d, unsigned odd)
+{
+  rtx t1;
+  machine_mode mode = GET_MODE (d->target);
+
+  if (d->testing_p)
+    return true;
+
+  t1 = gen_reg_rtx (mode);
+
+  switch (d->vmode)
+    {
+    case E_V4DFmode:
+      /* Shuffle the lanes around into { 0 4 2 6 } and { 1 5 3 7 }.  */
+      if (odd)
+	emit_insn (gen_lasx_xvilvh_d_f (t1, d->op0, d->op1));
+      else
+	emit_insn (gen_lasx_xvilvl_d_f (t1, d->op0, d->op1));
+
+      /* Shuffle within the 256-bit lanes to produce the result required.
+	 { 0 2 4 6 } | { 1 3 5 7 }.  */
+      emit_insn (gen_lasx_xvpermi_d_v4df (d->target, t1, GEN_INT (0xd8)));
+      break;
+
+    case E_V4DImode:
+      if (odd)
+	emit_insn (gen_lasx_xvilvh_d (t1, d->op0, d->op1));
+      else
+	emit_insn (gen_lasx_xvilvl_d (t1, d->op0, d->op1));
+
+      emit_insn (gen_lasx_xvpermi_d_v4di (d->target, t1, GEN_INT (0xd8)));
+      break;
+
+    case E_V8SFmode:
+      /* Shuffle the lanes around into:
+	 { 0 2 8 a 4 6 c e } | { 1 3 9 b 5 7 d f }.  */
+      if (odd)
+	emit_insn (gen_lasx_xvpickod_w_f (t1, d->op0, d->op1));
+      else
+	emit_insn (gen_lasx_xvpickev_w_f (t1, d->op0, d->op1));
+
+      /* Shuffle within the 256-bit lanes to produce the result required.
+	 { 0 2 4 6 8 a c e } | { 1 3 5 7 9 b d f }.  */
+      emit_insn (gen_lasx_xvpermi_d_v8sf (d->target, t1, GEN_INT (0xd8)));
+      break;
+
+    case E_V8SImode:
+      if (odd)
+	emit_insn (gen_lasx_xvpickod_w (t1, d->op0, d->op1));
+      else
+	emit_insn (gen_lasx_xvpickev_w (t1, d->op0, d->op1));
+
+      emit_insn (gen_lasx_xvpermi_d_v8si (d->target, t1, GEN_INT (0xd8)));
+      break;
+
+    case E_V16HImode:
+      if (odd)
+	emit_insn (gen_lasx_xvpickod_h (t1, d->op0, d->op1));
+      else
+	emit_insn (gen_lasx_xvpickev_h (t1, d->op0, d->op1));
+
+      emit_insn (gen_lasx_xvpermi_d_v16hi (d->target, t1, GEN_INT (0xd8)));
+      break;
+
+    case E_V32QImode:
+      if (odd)
+	emit_insn (gen_lasx_xvpickod_b (t1, d->op0, d->op1));
+      else
+	emit_insn (gen_lasx_xvpickev_b (t1, d->op0, d->op1));
+
+      emit_insn (gen_lasx_xvpermi_d_v32qi (d->target, t1, GEN_INT (0xd8)));
+      break;
+
+    default:
+      gcc_unreachable ();
+    }
+
+  return true;
+}
+
+/* Pattern match extract-even and extract-odd permutations.  */
+
+static bool
+loongarch_expand_vec_perm_even_odd (struct expand_vec_perm_d *d)
+{
+  unsigned i, odd, nelt = d->nelt;
+  if (!TARGET_LASX)
+    return false;
+
+  odd = d->perm[0];
+  if (odd != 0 && odd != 1)
+    return false;
+
+  for (i = 1; i < nelt; ++i)
+    if (d->perm[i] != 2 * i + odd)
+      return false;
+
+  return loongarch_expand_vec_perm_even_odd_1 (d, odd);
+}
+
+/* Expand a variable vector permutation for LASX.  */
+
+void
+loongarch_expand_vec_perm_1 (rtx operands[])
+{
+  rtx target = operands[0];
+  rtx op0 = operands[1];
+  rtx op1 = operands[2];
+  rtx mask = operands[3];
+
+  bool one_operand_shuffle = rtx_equal_p (op0, op1);
+  rtx t1 = NULL;
+  rtx t2 = NULL;
+  rtx t3, t4, t5, t6, vt = NULL;
+  rtx vec[32] = {NULL};
+  machine_mode mode = GET_MODE (op0);
+  machine_mode maskmode = GET_MODE (mask);
+  int w, i;
+
+  /* Number of elements in the vector.  */
+  w = GET_MODE_NUNITS (mode);
+
+  if (mode == V4DImode || mode == V4DFmode)
+    {
+      maskmode = mode = V8SImode;
+      w = 8;
+      t1 = gen_reg_rtx (maskmode);
+
+      /* Replicate the low bits of the V4DImode mask into V8SImode:
+	 mask = { A B C D }
+	 t1 = { A A B B C C D D }.  */
+      for (i = 0; i < w / 2; ++i)
+	vec[i*2 + 1] = vec[i*2] = GEN_INT (i * 2);
+      vt = gen_rtx_CONST_VECTOR (maskmode, gen_rtvec_v (w, vec));
+      vt = force_reg (maskmode, vt);
+      mask = gen_lowpart (maskmode, mask);
+      emit_insn (gen_lasx_xvperm_w (t1, mask, vt));
+
+      /* Multiply the shuffle indices by two.  */
+      t1 = expand_simple_binop (maskmode, PLUS, t1, t1, t1, 1,
+				OPTAB_DIRECT);
+
+      /* Add one to the odd shuffle indices:
+	 t1 = { A*2, A*2+1, B*2, B*2+1, ... }.  */
+      for (i = 0; i < w / 2; ++i)
+	{
+	  vec[i * 2] = const0_rtx;
+	  vec[i * 2 + 1] = const1_rtx;
+	}
+      vt = gen_rtx_CONST_VECTOR (maskmode, gen_rtvec_v (w, vec));
+      vt = validize_mem (force_const_mem (maskmode, vt));
+      t1 = expand_simple_binop (maskmode, PLUS, t1, vt, t1, 1,
+				OPTAB_DIRECT);
+
+      /* Continue as if V8SImode was used initially.  */
+      operands[3] = mask = t1;
+      target = gen_reg_rtx (mode);
+      op0 = gen_lowpart (mode, op0);
+      op1 = gen_lowpart (mode, op1);
+    }
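+
+  /* For instance, a V4DImode mask { 1, 3, 0, 2 } has at this point been
+     rewritten as the V8SImode mask { 2, 3, 6, 7, 0, 1, 4, 5 }: each 64-bit
+     index K became the pair of 32-bit indices 2*K and 2*K+1.  */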
+
+  switch (mode)
+    {
+    case E_V8SImode:
+      if (one_operand_shuffle)
+	{
+	  emit_insn (gen_lasx_xvperm_w (target, op0, mask));
+	  if (target != operands[0])
+	    emit_move_insn (operands[0],
+			    gen_lowpart (GET_MODE (operands[0]), target));
+	}
+      else
+	{
+	  t1 = gen_reg_rtx (V8SImode);
+	  t2 = gen_reg_rtx (V8SImode);
+	  emit_insn (gen_lasx_xvperm_w (t1, op0, mask));
+	  emit_insn (gen_lasx_xvperm_w (t2, op1, mask));
+	  goto merge_two;
+	}
+      return;
+
+    case E_V8SFmode:
+      mask = gen_lowpart (V8SImode, mask);
+      if (one_operand_shuffle)
+	emit_insn (gen_lasx_xvperm_w_f (target, op0, mask));
+      else
+	{
+	  t1 = gen_reg_rtx (V8SFmode);
+	  t2 = gen_reg_rtx (V8SFmode);
+	  emit_insn (gen_lasx_xvperm_w_f (t1, op0, mask));
+	  emit_insn (gen_lasx_xvperm_w_f (t2, op1, mask));
+	  goto merge_two;
+	}
+      return;
+
+    case E_V16HImode:
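+      /* xvshuf.h only selects within each 128-bit half, so the low and high
+	 halves of an operand are first broadcast to full registers with
+	 xvpermi.d (0x44 and 0xee); every lane of the following xvshuf.h can
+	 then reach all sixteen elements of that operand.  */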
+      if (one_operand_shuffle)
+	{
+	  t1 = gen_reg_rtx (V16HImode);
+	  t2 = gen_reg_rtx (V16HImode);
+	  emit_insn (gen_lasx_xvpermi_d_v16hi (t1, op0, GEN_INT (0x44)));
+	  emit_insn (gen_lasx_xvpermi_d_v16hi (t2, op0, GEN_INT (0xee)));
+	  emit_insn (gen_lasx_xvshuf_h (target, mask, t2, t1));
+	}
+      else
+	{
+	  t1 = gen_reg_rtx (V16HImode);
+	  t2 = gen_reg_rtx (V16HImode);
+	  t3 = gen_reg_rtx (V16HImode);
+	  t4 = gen_reg_rtx (V16HImode);
+	  t5 = gen_reg_rtx (V16HImode);
+	  t6 = gen_reg_rtx (V16HImode);
+	  emit_insn (gen_lasx_xvpermi_d_v16hi (t3, op0, GEN_INT (0x44)));
+	  emit_insn (gen_lasx_xvpermi_d_v16hi (t4, op0, GEN_INT (0xee)));
+	  emit_insn (gen_lasx_xvshuf_h (t1, mask, t4, t3));
+	  emit_insn (gen_lasx_xvpermi_d_v16hi (t5, op1, GEN_INT (0x44)));
+	  emit_insn (gen_lasx_xvpermi_d_v16hi (t6, op1, GEN_INT (0xee)));
+	  emit_insn (gen_lasx_xvshuf_h (t2, mask, t6, t5));
+	  goto merge_two;
+	}
+      return;
+
+    case E_V32QImode:
+      if (one_operand_shuffle)
+	{
+	  t1 = gen_reg_rtx (V32QImode);
+	  t2 = gen_reg_rtx (V32QImode);
+	  emit_insn (gen_lasx_xvpermi_d_v32qi (t1, op0, GEN_INT (0x44)));
+	  emit_insn (gen_lasx_xvpermi_d_v32qi (t2, op0, GEN_INT (0xee)));
+	  emit_insn (gen_lasx_xvshuf_b (target, t2, t1, mask));
+	}
+      else
+	{
+	  t1 = gen_reg_rtx (V32QImode);
+	  t2 = gen_reg_rtx (V32QImode);
+	  t3 = gen_reg_rtx (V32QImode);
+	  t4 = gen_reg_rtx (V32QImode);
+	  t5 = gen_reg_rtx (V32QImode);
+	  t6 = gen_reg_rtx (V32QImode);
+	  emit_insn (gen_lasx_xvpermi_d_v32qi (t3, op0, GEN_INT (0x44)));
+	  emit_insn (gen_lasx_xvpermi_d_v32qi (t4, op0, GEN_INT (0xee)));
+	  emit_insn (gen_lasx_xvshuf_b (t1, t4, t3, mask));
+	  emit_insn (gen_lasx_xvpermi_d_v32qi (t5, op1, GEN_INT (0x44)));
+	  emit_insn (gen_lasx_xvpermi_d_v32qi (t6, op1, GEN_INT (0xee)));
+	  emit_insn (gen_lasx_xvshuf_b (t2, t6, t5, mask));
+	  goto merge_two;
+	}
+      return;
+
+    default:
+      gcc_assert (GET_MODE_SIZE (mode) == 32);
+      break;
+    }
+
+merge_two:
+  /* Then merge them together.  The key is whether any given control
+     element contained a bit set that indicates the second word.  */
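+  /* Each selector element is ANDed with W; where the result equals W the
+     original index referred to the second operand, so the vec_cond picks
+     that element from the op1 shuffle (t2), otherwise from the op0 shuffle
+     (t1).  */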
+  rtx xops[6];
+  mask = operands[3];
+  vt = GEN_INT (w);
+  vt = gen_const_vec_duplicate (maskmode, vt);
+  vt = force_reg (maskmode, vt);
+  mask = expand_simple_binop (maskmode, AND, mask, vt,
+			      NULL_RTX, 0, OPTAB_DIRECT);
+  if (GET_MODE (target) != mode)
+    target = gen_reg_rtx (mode);
+  xops[0] = target;
+  xops[1] = gen_lowpart (mode, t2);
+  xops[2] = gen_lowpart (mode, t1);
+  xops[3] = gen_rtx_EQ (maskmode, mask, vt);
+  xops[4] = mask;
+  xops[5] = vt;
+
+  loongarch_expand_vec_cond_expr (mode, maskmode, xops);
+  if (target != operands[0])
+    emit_move_insn (operands[0],
+		    gen_lowpart (GET_MODE (operands[0]), target));
+}
+
+void
+loongarch_expand_vec_perm (rtx target, rtx op0, rtx op1, rtx sel)
+{
+  machine_mode vmode = GET_MODE (target);
+
+  gcc_checking_assert (vmode == E_V16QImode
+      || vmode == E_V2DImode || vmode == E_V2DFmode
+      || vmode == E_V4SImode || vmode == E_V4SFmode
+      || vmode == E_V8HImode);
+  gcc_checking_assert (GET_MODE (op0) == vmode);
+  gcc_checking_assert (GET_MODE (op1) == vmode);
+  gcc_checking_assert (GET_MODE (sel) == vmode);
+  gcc_checking_assert (ISA_HAS_LSX);
+
+  switch (vmode)
+    {
+    case E_V16QImode:
+      emit_insn (gen_lsx_vshuf_b (target, op1, op0, sel));
+      break;
+    case E_V2DFmode:
+      emit_insn (gen_lsx_vshuf_d_f (target, sel, op1, op0));
+      break;
+    case E_V2DImode:
+      emit_insn (gen_lsx_vshuf_d (target, sel, op1, op0));
+      break;
+    case E_V4SFmode:
+      emit_insn (gen_lsx_vshuf_w_f (target, sel, op1, op0));
+      break;
+    case E_V4SImode:
+      emit_insn (gen_lsx_vshuf_w (target, sel, op1, op0));
+      break;
+    case E_V8HImode:
+      emit_insn (gen_lsx_vshuf_h (target, sel, op1, op0));
+      break;
+    default:
+      break;
+    }
+}
+
+static bool
+loongarch_try_expand_lsx_vshuf_const (struct expand_vec_perm_d *d)
+{
+  int i;
   rtx target, op0, op1, sel, tmp;
   rtx rperm[MAX_VECT_LEN];
 
@@ -7668,25 +8235,1302 @@ loongarch_expand_vec_perm_const_1 (struct expand_vec_perm_d *d)
 	return true;
     }
 
-  if (loongarch_expand_lsx_shuffle (d))
-    return true;
-  return false;
-}
-
-/* Implementation of constant vector permuatation.  This function identifies
- * recognized pattern of permuation selector argument, and use one or more
- * instruction(s) to finish the permutation job correctly.  For unsupported
- * patterns, it will return false.  */
-
-static bool
-loongarch_expand_vec_perm_const_2 (struct expand_vec_perm_d *d)
-{
-  /* Although we have the LSX vec_perm<mode> template, there's still some
-     128bit vector permuatation operations send to vectorize_vec_perm_const.
-     In this case, we just simpliy wrap them by single vshuf.* instruction,
-     because LSX vshuf.* instruction just have the same behavior that GCC
-     expects.  */
-  return loongarch_try_expand_lsx_vshuf_const (d);
+  if (loongarch_expand_lsx_shuffle (d))
+    return true;
+  if (loongarch_expand_vec_perm_even_odd (d))
+    return true;
+  if (loongarch_expand_vec_perm_interleave (d))
+    return true;
+  return false;
+}
+
+/* The following are assist functions for constant vector permutation
+   support.  */
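+
+/* Match a selector whose first half replicates element K and whose second
+   half replicates element K + nelt / 2, with K < nelt / 2, e.g. V8SImode
+   { 0, 0, 0, 0, 4, 4, 4, 4 }.  */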
+static bool
+loongarch_is_quad_duplicate (struct expand_vec_perm_d *d)
+{
+  if (d->perm[0] >= d->nelt / 2)
+    return false;
+
+  bool result = true;
+  unsigned char lhs = d->perm[0];
+  unsigned char rhs = d->perm[d->nelt / 2];
+
+  if ((rhs - lhs) != d->nelt / 2)
+    return false;
+
+  for (int i = 1; i < d->nelt; i += 1)
+    {
+      if ((i < d->nelt / 2) && (d->perm[i] != lhs))
+	{
+	  result = false;
+	  break;
+	}
+      if ((i > d->nelt / 2) && (d->perm[i] != rhs))
+	{
+	  result = false;
+	  break;
+	}
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_double_duplicate (struct expand_vec_perm_d *d)
+{
+  if (!d->one_vector_p)
+    return false;
+
+  if (d->nelt < 8)
+    return false;
+
+  bool result = true;
+  unsigned char buf = d->perm[0];
+
+  for (int i = 1; i < d->nelt; i += 2)
+    {
+      if (d->perm[i] != buf)
+	{
+	  result = false;
+	  break;
+	}
+      if (d->perm[i - 1] != d->perm[i])
+	{
+	  result = false;
+	  break;
+	}
+      buf += d->nelt / 4;
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_odd_extraction (struct expand_vec_perm_d *d)
+{
+  bool result = true;
+  unsigned char buf = 1;
+
+  for (int i = 0; i < d->nelt; i += 1)
+    {
+      if (buf != d->perm[i])
+	{
+	  result = false;
+	  break;
+	}
+      buf += 2;
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_even_extraction (struct expand_vec_perm_d *d)
+{
+  bool result = true;
+  unsigned char buf = 0;
+
+  for (int i = 0; i < d->nelt; i += 1)
+    {
+      if (buf != d->perm[i])
+	{
+	  result = false;
+	  break;
+	}
+      buf += 2;
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_extraction_permutation (struct expand_vec_perm_d *d)
+{
+  bool result = true;
+  unsigned char buf = d->perm[0];
+
+  if (buf != 0 && buf != d->nelt)
+    return false;
+
+  for (int i = 0; i < d->nelt; i += 1)
+    {
+      if (buf != d->perm[i])
+	{
+	  result = false;
+	  break;
+	}
+      buf += 1;
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_center_extraction (struct expand_vec_perm_d *d)
+{
+  bool result = true;
+  unsigned buf = d->nelt / 2;
+
+  for (int i = 0; i < d->nelt; i += 1)
+    {
+      if (buf != d->perm[i])
+	{
+	  result = false;
+	  break;
+	}
+      buf += 1;
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_reversing_permutation (struct expand_vec_perm_d *d)
+{
+  if (!d->one_vector_p)
+    return false;
+
+  bool result = true;
+  unsigned char buf = d->nelt - 1;
+
+  for (int i = 0; i < d->nelt; i += 1)
+    {
+      if (d->perm[i] != buf)
+	{
+	  result = false;
+	  break;
+	}
+
+      buf -= 1;
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_di_misalign_extract (struct expand_vec_perm_d *d)
+{
+  if (d->nelt != 4 && d->nelt != 8)
+    return false;
+
+  bool result = true;
+  unsigned char buf;
+
+  if (d->nelt == 4)
+    {
+      buf = 1;
+      for (int i = 0; i < d->nelt; i += 1)
+	{
+	  if (buf != d->perm[i])
+	    {
+	      result = false;
+	      break;
+	    }
+
+	  buf += 1;
+	}
+    }
+  else if (d->nelt == 8)
+    {
+      buf = 2;
+      for (int i = 0; i < d->nelt; i += 1)
+	{
+	  if (buf != d->perm[i])
+	    {
+	      result = false;
+	      break;
+	    }
+
+	  buf += 1;
+	}
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_si_misalign_extract (struct expand_vec_perm_d *d)
+{
+  if (d->vmode != E_V8SImode && d->vmode != E_V8SFmode)
+    return false;
+  bool result = true;
+  unsigned char buf = 1;
+
+  for (int i = 0; i < d->nelt; i += 1)
+    {
+      if (buf != d->perm[i])
+	{
+	  result = false;
+	  break;
+	}
+      buf += 1;
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_lasx_lowpart_interleave (struct expand_vec_perm_d *d)
+{
+  bool result = true;
+  unsigned char buf = 0;
+
+  for (int i = 0;i < d->nelt; i += 2)
+    {
+      if (buf != d->perm[i])
+	{
+	  result = false;
+	  break;
+	}
+      buf += 1;
+    }
+
+  if (result)
+    {
+      buf = d->nelt;
+      for (int i = 1; i < d->nelt; i += 2)
+	{
+	  if (buf != d->perm[i])
+	    {
+	      result = false;
+	      break;
+	    }
+	  buf += 1;
+	}
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_lasx_lowpart_interleave_2 (struct expand_vec_perm_d *d)
+{
+  if (d->vmode != E_V32QImode)
+    return false;
+  bool result = true;
+  unsigned char buf = 0;
+
+#define COMPARE_SELECTOR(INIT, BEGIN, END) \
+  buf = INIT; \
+  for (int i = BEGIN; i < END && result; i += 1) \
+    { \
+      if (buf != d->perm[i]) \
+	{ \
+	  result = false; \
+	  break; \
+	} \
+      buf += 1; \
+    }
+
+  COMPARE_SELECTOR (0, 0, 8);
+  COMPARE_SELECTOR (32, 8, 16);
+  COMPARE_SELECTOR (8, 16, 24);
+  COMPARE_SELECTOR (40, 24, 32);
+
+#undef COMPARE_SELECTOR
+  return result;
+}
+
+static bool
+loongarch_is_lasx_lowpart_extract (struct expand_vec_perm_d *d)
+{
+  bool result = true;
+  unsigned char buf = 0;
+
+  for (int i = 0; i < d->nelt / 2; i += 1)
+    {
+      if (buf != d->perm[i])
+	{
+	  result = false;
+	  break;
+	}
+      buf += 1;
+    }
+
+  if (result)
+    {
+      buf = d->nelt;
+      for (int i = d->nelt / 2; i < d->nelt; i += 1)
+	{
+	  if (buf != d->perm[i])
+	    {
+	      result = false;
+	      break;
+	    }
+	  buf += 1;
+	}
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_lasx_highpart_interleave (expand_vec_perm_d *d)
+{
+  bool result = true;
+  unsigned char buf = d->nelt / 2;
+
+  for (int i = 0; i < d->nelt; i += 2)
+    {
+      if (buf != d->perm[i])
+	{
+	  result = false;
+	  break;
+	}
+      buf += 1;
+    }
+
+  if (result)
+    {
+      buf = d->nelt + d->nelt / 2;
+      for (int i = 1; i < d->nelt;i += 2)
+	{
+	  if (buf != d->perm[i])
+	    {
+	      result = false;
+	      break;
+	    }
+	  buf += 1;
+	}
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_lasx_highpart_interleave_2 (struct expand_vec_perm_d *d)
+{
+  if (d->vmode != E_V32QImode)
+    return false;
+
+  bool result = true;
+  unsigned char buf = 0;
+
+#define COMPARE_SELECTOR(INIT, BEGIN, END) \
+  buf = INIT; \
+  for (int i = BEGIN; i < END && result; i += 1) \
+    { \
+      if (buf != d->perm[i]) \
+	{ \
+	  result = false; \
+	  break; \
+	} \
+      buf += 1; \
+    }
+
+  COMPARE_SELECTOR (16, 0, 8);
+  COMPARE_SELECTOR (48, 8, 16);
+  COMPARE_SELECTOR (24, 16, 24);
+  COMPARE_SELECTOR (56, 24, 32);
+
+#undef COMPARE_SELECTOR
+  return result;
+}
+
+static bool
+loongarch_is_elem_duplicate (struct expand_vec_perm_d *d)
+{
+  bool result = true;
+  unsigned char buf = d->perm[0];
+
+  for (int i = 0; i < d->nelt; i += 1)
+    {
+      if (buf != d->perm[i])
+	{
+	  result = false;
+	  break;
+	}
+    }
+
+  return result;
+}
+
+inline bool
+loongarch_is_op_reverse_perm (struct expand_vec_perm_d *d)
+{
+  return (d->vmode == E_V4DFmode)
+    && d->perm[0] == 2 && d->perm[1] == 3
+    && d->perm[2] == 0 && d->perm[3] == 1;
+}
+
+static bool
+loongarch_is_single_op_perm (struct expand_vec_perm_d *d)
+{
+  bool result = true;
+
+  for (int i = 0; i < d->nelt; i += 1)
+    {
+      if (d->perm[i] >= d->nelt)
+	{
+	  result = false;
+	  break;
+	}
+    }
+
+  return result;
+}
+
+static bool
+loongarch_is_divisible_perm (struct expand_vec_perm_d *d)
+{
+  bool result = true;
+
+  for (int i = 0; i < d->nelt / 2; i += 1)
+    {
+      if (d->perm[i] >= d->nelt)
+	{
+	  result = false;
+	  break;
+	}
+    }
+
+  if (result)
+    {
+      for (int i = d->nelt / 2; i < d->nelt; i += 1)
+	{
+	  if (d->perm[i] < d->nelt)
+	    {
+	      result = false;
+	      break;
+	    }
+	}
+    }
+
+  return result;
+}
+
+inline bool
+loongarch_is_triple_stride_extract (struct expand_vec_perm_d *d)
+{
+  return (d->vmode == E_V4DImode || d->vmode == E_V4DFmode)
+    && d->perm[0] == 1 && d->perm[1] == 4
+    && d->perm[2] == 7 && d->perm[3] == 0;
+}
+
+/* In LASX, some permutation insn does not have the behavior that gcc expects
+ * when compiler wants to emit a vector permutation.
+ *
+ * 1. What GCC provides via vectorize_vec_perm_const ()'s parameters:
+ * When GCC wants to perform a vector permutation, it provides two op
+ * registers, one target register, and a selector.
+ * In the const vector permutation case, GCC provides the selector as a char
+ * array that contains original values; in variable vector permutation
+ * (performed via the vec_perm<mode> insn template), it provides a vector
+ * register.  We assume that nelt is the number of elements inside a single
+ * vector in the current 256bit vector mode.
+ *
+ * 2. What GCC expects to perform:
+ * Two op registers (op0, op1) will "combine" into a 512bit temp vector storage
+ * that has 2*nelt elements inside it; the low 256bit is op0, and high 256bit
+ * is op1, then the elements are indexed as below:
+ *		  0 ~ nelt - 1		nelt ~ 2 * nelt - 1
+ *	  |-------------------------|-------------------------|
+ *		Low 256bit (op0)	High 256bit (op1)
+ * For example, the second element in op1 (V8SImode) will be indexed with 9.
+ * The selector is a vector that has the same mode and number of elements as
+ * op0, op1 and the target; it looks like this:
+ *	      0 ~ nelt - 1
+ *	  |-------------------------|
+ *	      256bit (selector)
+ * It describes which element from the 512bit temp vector storage will go into
+ * each element slot of the target.
+ * GCC expects that every element in the selector can be ANY index into the
+ * 512bit vector storage (the selector can pick literally any element from op0
+ * and op1, and put it into any place of the target register).  This is also
+ * how the LSX 128bit vshuf.* instructions behave, so we can handle a 128bit
+ * vector permutation with a single instruction easily.
+ *
+ * 3. What the LASX permutation instruction does:
+ * In short, it just executes two independent 128bit vector permutations, and
+ * that is the reason we need to do the jobs below.  We will explain it.
+ * op0, op1, target, and selector are each separated into a high 128bit and a
+ * low 128bit part, and the permutation works as described below:
+ *
+ *  a) op0's low 128bit and op1's low 128bit "combines" into a 256bit temp
+ * vector storage (TVS1), elements are indexed as below:
+ *	    0 ~ nelt / 2 - 1	  nelt / 2 ~ nelt - 1
+ *	|---------------------|---------------------| TVS1
+ *	    op0's low 128bit      op1's low 128bit
+ *    op0's high 128bit and op1's high 128bit are "combined" into TVS2 in the
+ *    same way.
+ *	    0 ~ nelt / 2 - 1	  nelt / 2 ~ nelt - 1
+ *	|---------------------|---------------------| TVS2
+ *	    op0's high 128bit	op1's high 128bit
+ *  b) Selector's low 128bit describes which elements from TVS1 will fit into
+ *  target vector's low 128bit.  No TVS2 elements are allowed.
+ *  c) Selector's high 128bit describes which elements from TVS2 will fit into
+ *  target vector's high 128bit.  No TVS1 elements are allowed.
+ *
+ * As we can see, if we want to handle vector permutation correctly, we can
+ * achieve it in three ways:
+ *  a) Modify the selector's elements, to make sure that every element selects
+ *  the correct value to put into the target vector.
+ *  b) Generate extra instructions before/after the permutation instruction to
+ *  adjust the op vectors or the target vector, so that the target vector's
+ *  value is what GCC expects.
+ *  c) Use other instructions to process the ops and put the correct result
+ *  into the target.
+ */
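+
+/* For example, with V8SImode GCC's index 9 (the second element of op1) has to
+   appear as 5 in the low 128bit of an xvshuf.w selector, since op1's low
+   128bit occupies slots 4 ~ 7 of TVS1, while an element living in the high
+   128bit of op0/op1 can only reach the target's low 128bit after an extra
+   xvpermi.q (or similar) has moved it there first.  */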
+
+/* Implementation of constant vector permutation.  This function identifies
+ * recognized patterns of the permutation selector argument, and uses one or
+ * more instructions to finish the permutation job correctly.  For unsupported
+ * patterns, it will return false.  */
+
+static bool
+loongarch_expand_vec_perm_const_2 (struct expand_vec_perm_d *d)
+{
+  /* Although we have the LSX vec_perm<mode> template, there are still some
+     128bit vector permutation operations sent to vectorize_vec_perm_const.
+     In this case, we just simply wrap them with a single vshuf.* instruction,
+     because the LSX vshuf.* instructions have exactly the behavior that GCC
+     expects.  */
+  if (d->vmode != E_V32QImode && d->vmode != E_V16HImode
+      && d->vmode != E_V4DImode && d->vmode != E_V4DFmode
+      && d->vmode != E_V8SImode && d->vmode != E_V8SFmode)
+    return loongarch_try_expand_lsx_vshuf_const (d);
+
+  bool ok = false, reverse_hi_lo = false, extract_ev_od = false,
+       use_alt_op = false;
+  unsigned char idx;
+  int i;
+  rtx target, op0, op1, sel, tmp;
+  rtx op0_alt = NULL_RTX, op1_alt = NULL_RTX;
+  rtx rperm[MAX_VECT_LEN];
+  unsigned char remapped[MAX_VECT_LEN];
+
+  /* Try to figure out whether this is a recognized permutation selector
+     pattern.  If it is, we will reassign some elements of the selector
+     argument with new values, and in some cases we will generate some assist
+     insns to complete the permutation.  (In some cases we even use other
+     insns to implement the permutation instead of xvshuf!)
+
+     Make sure to check that d->testing_p is false every time you want to emit
+     a new insn, unless you want to crash into an ICE directly.  */
+  if (loongarch_is_quad_duplicate (d))
+    {
+      /* Selector example: E_V8SImode, { 0, 0, 0, 0, 4, 4, 4, 4 }
+	 Copy the first element of the original selector to every element of
+	 the new selector.  */
+      idx = d->perm[0];
+      for (i = 0; i < d->nelt; i += 1)
+	{
+	  remapped[i] = idx;
+	}
+      /* Selector after: { 0, 0, 0, 0, 0, 0, 0, 0 }.  */
+    }
+  else if (loongarch_is_double_duplicate (d))
+    {
+      /* Selector example: E_V8SImode, { 1, 1, 3, 3, 5, 5, 7, 7 }
+	 one_vector_p == true.  */
+      for (i = 0; i < d->nelt / 2; i += 1)
+	{
+	  idx = d->perm[i];
+	  remapped[i] = idx;
+	  remapped[i + d->nelt / 2] = idx;
+	}
+      /* Selector after: { 1, 1, 3, 3, 1, 1, 3, 3 }.  */
+    }
+  else if (loongarch_is_odd_extraction (d)
+	   || loongarch_is_even_extraction (d))
+    {
+      /* Odd extraction selector sample: E_V4DImode, { 1, 3, 5, 7 }
+	 Selector after: { 1, 3, 1, 3 }.
+	 Even extraction selector sample: E_V4DImode, { 0, 2, 4, 6 }
+	 Selector after: { 0, 2, 0, 2 }.  */
+      for (i = 0; i < d->nelt / 2; i += 1)
+	{
+	  idx = d->perm[i];
+	  remapped[i] = idx;
+	  remapped[i + d->nelt / 2] = idx;
+	}
+      /* Additional insn is required for correct result.  See codes below.  */
+      extract_ev_od = true;
+    }
+  else if (loongarch_is_extraction_permutation (d))
+    {
+      /* Selector sample: E_V8SImode, { 0, 1, 2, 3, 4, 5, 6, 7 }.  */
+      if (d->perm[0] == 0)
+	{
+	  for (i = 0; i < d->nelt / 2; i += 1)
+	    {
+	      remapped[i] = i;
+	      remapped[i + d->nelt / 2] = i;
+	    }
+	}
+      else
+	{
+	  /* { 8, 9, 10, 11, 12, 13, 14, 15 }.  */
+	  for (i = 0; i < d->nelt / 2; i += 1)
+	    {
+	      idx = i + d->nelt / 2;
+	      remapped[i] = idx;
+	      remapped[i + d->nelt / 2] = idx;
+	    }
+	}
+      /* Selector after: { 0, 1, 2, 3, 0, 1, 2, 3 }
+	 { 8, 9, 10, 11, 8, 9, 10, 11 }  */
+    }
+  else if (loongarch_is_center_extraction (d))
+    {
+      /* Selector sample: E_V4DImode, { 2, 3, 4, 5 }
+	 In this case, we can just copy the high 128bit of op0 and the low
+	 128bit of op1 to the target register by using the xvpermi.q insn.  */
+      if (!d->testing_p)
+	{
+	  emit_move_insn (d->target, d->op1);
+	  switch (d->vmode)
+	    {
+	      case E_V4DImode:
+		emit_insn (gen_lasx_xvpermi_q_v4di (d->target, d->target,
+						    d->op0, GEN_INT (0x21)));
+		break;
+	      case E_V4DFmode:
+		emit_insn (gen_lasx_xvpermi_q_v4df (d->target, d->target,
+						    d->op0, GEN_INT (0x21)));
+		break;
+	      case E_V8SImode:
+		emit_insn (gen_lasx_xvpermi_q_v8si (d->target, d->target,
+						    d->op0, GEN_INT (0x21)));
+		break;
+	      case E_V8SFmode:
+		emit_insn (gen_lasx_xvpermi_q_v8sf (d->target, d->target,
+						    d->op0, GEN_INT (0x21)));
+		break;
+	      case E_V16HImode:
+		emit_insn (gen_lasx_xvpermi_q_v16hi (d->target, d->target,
+						     d->op0, GEN_INT (0x21)));
+		break;
+	      case E_V32QImode:
+		emit_insn (gen_lasx_xvpermi_q_v32qi (d->target, d->target,
+						     d->op0, GEN_INT (0x21)));
+		break;
+	      default:
+		break;
+	    }
+	}
+      ok = true;
+      /* Finish the function directly.  */
+      goto expand_perm_const_2_end;
+    }
+  else if (loongarch_is_reversing_permutation (d))
+    {
+      /* Selector sample: E_V8SImode, { 7, 6, 5, 4, 3, 2, 1, 0 }
+	 one_vector_p == true  */
+      idx = d->nelt / 2 - 1;
+      for (i = 0; i < d->nelt / 2; i += 1)
+	{
+	  remapped[i] = idx;
+	  remapped[i + d->nelt / 2] = idx;
+	  idx -= 1;
+	}
+      /* Selector after: { 3, 2, 1, 0, 3, 2, 1, 0 }
+	 Additional insn will be generated to swap hi and lo 128bit of target
+	 register.  */
+      reverse_hi_lo = true;
+    }
+  else if (loongarch_is_di_misalign_extract (d)
+	   || loongarch_is_si_misalign_extract (d))
+    {
+      /* Selector Sample:
+	 DI misalign: E_V4DImode, { 1, 2, 3, 4 }
+	 SI misalign: E_V8SImode, { 1, 2, 3, 4, 5, 6, 7, 8 }  */
+      if (!d->testing_p)
+	{
+	  /* Copy the original op0/op1 values to new temp registers.
+	     In some cases an operand register may be used in multiple places,
+	     so we need a new register instead of modifying the original one,
+	     to avoid a runtime crash or wrong values after execution.  */
+	  use_alt_op = true;
+	  op1_alt = gen_reg_rtx (d->vmode);
+	  emit_move_insn (op1_alt, d->op1);
+
+	  /* Adjust op1 for selecting correct value in high 128bit of target
+	     register.
+	     op1: E_V4DImode, { 4, 5, 6, 7 } -> { 2, 3, 4, 5 }.  */
+	  rtx conv_op1 = gen_rtx_SUBREG (E_V4DImode, op1_alt, 0);
+	  rtx conv_op0 = gen_rtx_SUBREG (E_V4DImode, d->op0, 0);
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op1, conv_op1,
+					      conv_op0, GEN_INT (0x21)));
+
+	  for (i = 0; i < d->nelt / 2; i += 1)
+	    {
+	      remapped[i] = d->perm[i];
+	      remapped[i + d->nelt / 2] = d->perm[i];
+	    }
+	  /* Selector after:
+	     DI misalign: { 1, 2, 1, 2 }
+	     SI misalign: { 1, 2, 3, 4, 1, 2, 3, 4 }  */
+	}
+    }
+  else if (loongarch_is_lasx_lowpart_interleave (d))
+    {
+      /* Elements from op0's low 128bit and op1's low 128bit are inserted into
+	 the target register alternately.
+	 Selector sample: E_V4DImode, { 0, 4, 1, 5 }  */
+      if (!d->testing_p)
+	{
+	  /* Prepare temp register instead of modify original op.  */
+	  use_alt_op = true;
+	  op1_alt = gen_reg_rtx (d->vmode);
+	  op0_alt = gen_reg_rtx (d->vmode);
+	  emit_move_insn (op1_alt, d->op1);
+	  emit_move_insn (op0_alt, d->op0);
+
+	  /* Generate subreg for fitting into insn gen function.  */
+	  rtx conv_op1 = gen_rtx_SUBREG (E_V4DImode, op1_alt, 0);
+	  rtx conv_op0 = gen_rtx_SUBREG (E_V4DImode, op0_alt, 0);
+
+	  /* Adjust op value in temp register.
+	     op0 = {0,1,2,3}, op1 = {4,5,0,1}  */
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op1, conv_op1,
+					      conv_op0, GEN_INT (0x02)));
+	  /* op0 = {0,1,4,5}, op1 = {4,5,0,1}  */
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op0, conv_op0,
+					      conv_op1, GEN_INT (0x01)));
+
+	  /* Remap indices in the selector based on each index's location
+	     inside the selector and the number of vector elements in the
+	     current vector mode.  */
+
+	  /* Filling low 128bit of new selector.  */
+	  for (i = 0; i < d->nelt / 2; i += 1)
+	    {
+	      /* value in odd-indexed slot of low 128bit part of selector
+		 vector.  */
+	      remapped[i] = i % 2 != 0 ? d->perm[i] - d->nelt / 2 : d->perm[i];
+	    }
+	  /* Then filling the high 128bit.  */
+	  for (i = d->nelt / 2; i < d->nelt; i += 1)
+	    {
+	      /* value in even-indexed slot of high 128bit part of
+		 selector vector.  */
+	      remapped[i] = i % 2 == 0
+		? d->perm[i] + (d->nelt / 2) * 3 : d->perm[i];
+	    }
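+	  /* For the sample selector { 0, 4, 1, 5 } above, these two loops
+	     produce the remapped selector { 0, 2, 7, 5 }.  */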
+	}
+    }
+  else if (loongarch_is_lasx_lowpart_interleave_2 (d))
+    {
+      /* Special lowpart interleave case in V32QI vector mode.  It does the
+	 same thing as the branch above.
+	 Selector sample: E_V32QImode,
+	 {0, 1, 2, 3, 4, 5, 6, 7, 32, 33, 34, 35, 36, 37, 38, 39, 8,
+	 9, 10, 11, 12, 13, 14, 15, 40, 41, 42, 43, 44, 45, 46, 47}  */
+      if (!d->testing_p)
+	{
+	  /* The solution for this case is very simple: convert the ops into
+	     V4DImode and do the same thing as in the previous branch.  */
+	  op1_alt = gen_reg_rtx (d->vmode);
+	  op0_alt = gen_reg_rtx (d->vmode);
+	  emit_move_insn (op1_alt, d->op1);
+	  emit_move_insn (op0_alt, d->op0);
+
+	  rtx conv_op1 = gen_rtx_SUBREG (E_V4DImode, op1_alt, 0);
+	  rtx conv_op0 = gen_rtx_SUBREG (E_V4DImode, op0_alt, 0);
+	  rtx conv_target = gen_rtx_SUBREG (E_V4DImode, d->target, 0);
+
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op1, conv_op1,
+					      conv_op0, GEN_INT (0x02)));
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op0, conv_op0,
+					      conv_op1, GEN_INT (0x01)));
+	  remapped[0] = 0;
+	  remapped[1] = 4;
+	  remapped[2] = 1;
+	  remapped[3] = 5;
+
+	  for (i = 0; i < d->nelt; i += 1)
+	    {
+	      rperm[i] = GEN_INT (remapped[i]);
+	    }
+
+	  sel = gen_rtx_CONST_VECTOR (E_V4DImode, gen_rtvec_v (4, rperm));
+	  sel = force_reg (E_V4DImode, sel);
+	  emit_insn (gen_lasx_xvshuf_d (conv_target, sel,
+					conv_op1, conv_op0));
+	}
+
+      ok = true;
+      goto expand_perm_const_2_end;
+    }
+  else if (loongarch_is_lasx_lowpart_extract (d))
+    {
+      /* Copy op0's low 128bit to target's low 128bit, and copy op1's low
+	 128bit to target's high 128bit.
+	 Selector sample: E_V4DImode, { 0, 1, 4, 5 }  */
+      if (!d->testing_p)
+	{
+	  rtx conv_op1 = gen_rtx_SUBREG (E_V4DImode, d->op1, 0);
+	  rtx conv_op0 = gen_rtx_SUBREG (E_V4DImode, d->op0, 0);
+	  rtx conv_target = gen_rtx_SUBREG (E_V4DImode, d->target, 0);
+
+	  /* A simple xvpermi.q insn can achieve the expected result.  */
+	  emit_move_insn (conv_target, conv_op1);
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_target, conv_target,
+					      conv_op0, GEN_INT (0x20)));
+	}
+
+      ok = true;
+      goto expand_perm_const_2_end;
+    }
+  else if (loongarch_is_lasx_highpart_interleave (d))
+    {
+      /* Similar to lowpart interleave, elements from op0's high 128bit and
+	 op1's high 128bit are inserted into the target register alternately.
+	 Selector sample: E_V8SImode, { 4, 12, 5, 13, 6, 14, 7, 15 }  */
+      if (!d->testing_p)
+	{
+	  /* Prepare temp op register.  */
+	  use_alt_op = true;
+	  op1_alt = gen_reg_rtx (d->vmode);
+	  op0_alt = gen_reg_rtx (d->vmode);
+	  emit_move_insn (op1_alt, d->op1);
+	  emit_move_insn (op0_alt, d->op0);
+
+	  rtx conv_op1 = gen_rtx_SUBREG (E_V4DImode, op1_alt, 0);
+	  rtx conv_op0 = gen_rtx_SUBREG (E_V4DImode, op0_alt, 0);
+	  /* Adjust op value in temp register.
+	     op0 = { 0, 1, 2, 3 }, op1 = { 6, 7, 2, 3 }  */
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op1, conv_op1,
+					      conv_op0, GEN_INT (0x13)));
+	  /* op0 = { 2, 3, 6, 7 }, op1 = { 6, 7, 2, 3 }  */
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op0, conv_op0,
+					      conv_op1, GEN_INT (0x01)));
+	  /* Remap indices in the selector based on each index's location
+	     inside the selector and the number of vector elements in the
+	     current vector mode.  */
+
+	  /* Filling low 128bit of new selector.  */
+	  for (i = 0; i < d->nelt / 2; i += 1)
+	    {
+	      /* value in even-indexed slot of low 128bit part of selector
+		 vector.  */
+	      remapped[i] = i % 2 == 0 ? d->perm[i] - d->nelt / 2 : d->perm[i];
+	    }
+	  /* Then filling the high 128bit.  */
+	  for (i = d->nelt / 2; i < d->nelt; i += 1)
+	    {
+	      /* value in odd-indexed slot of high 128bit part of selector
+		 vector.  */
+	      remapped[i] = i % 2 != 0
+		? d->perm[i] - (d->nelt / 2) * 3 : d->perm[i];
+	    }
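+	  /* For the sample selector { 4, 12, 5, 13, 6, 14, 7, 15 } above,
+	     these two loops produce { 0, 12, 1, 13, 6, 2, 7, 3 }.  */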
+	}
+    }
+  else if (loongarch_is_lasx_highpart_interleave_2 (d))
+    {
+      /* Special highpart interleave case in V32QI vector mode.  It does the
+	 same thing as the normal version above.
+	 Selector sample: E_V32QImode,
+	 {16, 17, 18, 19, 20, 21, 22, 23, 48, 49, 50, 51, 52, 53, 54, 55,
+	 24, 25, 26, 27, 28, 29, 30, 31, 56, 57, 58, 59, 60, 61, 62, 63}
+      */
+      if (!d->testing_p)
+	{
+	  /* Convert the ops into V4DImode and do the same thing as above.  */
+	  op1_alt = gen_reg_rtx (d->vmode);
+	  op0_alt = gen_reg_rtx (d->vmode);
+	  emit_move_insn (op1_alt, d->op1);
+	  emit_move_insn (op0_alt, d->op0);
+
+	  rtx conv_op1 = gen_rtx_SUBREG (E_V4DImode, op1_alt, 0);
+	  rtx conv_op0 = gen_rtx_SUBREG (E_V4DImode, op0_alt, 0);
+	  rtx conv_target = gen_rtx_SUBREG (E_V4DImode, d->target, 0);
+
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op1, conv_op1,
+					      conv_op0, GEN_INT (0x13)));
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op0, conv_op0,
+					      conv_op1, GEN_INT (0x01)));
+	  remapped[0] = 2;
+	  remapped[1] = 6;
+	  remapped[2] = 3;
+	  remapped[3] = 7;
+
+	  for (i = 0; i < d->nelt; i += 1)
+	    {
+	      rperm[i] = GEN_INT (remapped[i]);
+	    }
+
+	  sel = gen_rtx_CONST_VECTOR (E_V4DImode, gen_rtvec_v (4, rperm));
+	  sel = force_reg (E_V4DImode, sel);
+	  emit_insn (gen_lasx_xvshuf_d (conv_target, sel,
+					conv_op1, conv_op0));
+	}
+
+	ok = true;
+	goto expand_perm_const_2_end;
+    }
+  else if (loongarch_is_elem_duplicate (d))
+    {
+      /* Broadcast a single element (from op0 or op1) to all slots of the
+	 target register.
+	 Selector sample: E_V8SImode, { 2, 2, 2, 2, 2, 2, 2, 2 }  */
+      if (!d->testing_p)
+	{
+	  rtx conv_op1 = gen_rtx_SUBREG (E_V4DImode, d->op1, 0);
+	  rtx conv_op0 = gen_rtx_SUBREG (E_V4DImode, d->op0, 0);
+	  rtx temp_reg = gen_reg_rtx (d->vmode);
+	  rtx conv_temp = gen_rtx_SUBREG (E_V4DImode, temp_reg, 0);
+
+	  emit_move_insn (temp_reg, d->op0);
+
+	  idx = d->perm[0];
+	  /* We will use an xvrepl128vei.* insn to achieve the result, but we
+	     need to make the high and low 128bit halves hold the same
+	     contents, containing the value we need to broadcast, because
+	     xvrepl128vei broadcasts from each 128bit half of the source
+	     register to the corresponding half of the target register.  */
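+	  /* For example, with idx == 2 in E_V8SImode, xvrepl128vei.w copies
+	     element 2 of each 128bit half of temp_reg into every slot of the
+	     corresponding half of the target register.  */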
+	  if (/*idx >= 0 &&*/ idx < d->nelt / 2)
+	    {
+	      emit_insn (gen_lasx_xvpermi_q_v4di (conv_temp, conv_temp,
+						  conv_op0, GEN_INT (0x0)));
+	    }
+	  else if (idx >= d->nelt / 2 && idx < d->nelt)
+	    {
+	      emit_insn (gen_lasx_xvpermi_q_v4di (conv_temp, conv_temp,
+						  conv_op0, GEN_INT (0x11)));
+	      idx -= d->nelt / 2;
+	    }
+	  else if (idx >= d->nelt && idx < (d->nelt + d->nelt / 2))
+	    {
+	      emit_insn (gen_lasx_xvpermi_q_v4di (conv_temp, conv_temp,
+						  conv_op1, GEN_INT (0x0)));
+	    }
+	  else if (idx >= (d->nelt + d->nelt / 2) && idx < d->nelt * 2)
+	    {
+	      emit_insn (gen_lasx_xvpermi_q_v4di (conv_temp, conv_temp,
+						  conv_op1, GEN_INT (0x11)));
+	      idx -= d->nelt / 2;
+	    }
+
+	  /* Then we can finally generate this insn.  */
+	  switch (d->vmode)
+	    {
+	    case E_V4DImode:
+	      emit_insn (gen_lasx_xvrepl128vei_d (d->target, temp_reg,
+						  GEN_INT (idx)));
+	      break;
+	    case E_V4DFmode:
+	      emit_insn (gen_lasx_xvrepl128vei_d_f (d->target, temp_reg,
+						    GEN_INT (idx)));
+	      break;
+	    case E_V8SImode:
+	      emit_insn (gen_lasx_xvrepl128vei_w (d->target, temp_reg,
+						  GEN_INT (idx)));
+	      break;
+	    case E_V8SFmode:
+	      emit_insn (gen_lasx_xvrepl128vei_w_f (d->target, temp_reg,
+						    GEN_INT (idx)));
+	      break;
+	    case E_V16HImode:
+	      emit_insn (gen_lasx_xvrepl128vei_h (d->target, temp_reg,
+						  GEN_INT (idx)));
+	      break;
+	    case E_V32QImode:
+	      emit_insn (gen_lasx_xvrepl128vei_b (d->target, temp_reg,
+						  GEN_INT (idx)));
+	      break;
+	    default:
+	      gcc_unreachable ();
+	      break;
+	    }
+
+	  /* Finish the function directly.  */
+	  ok = true;
+	  goto expand_perm_const_2_end;
+	}
+    }
+  else if (loongarch_is_op_reverse_perm (d))
+    {
+      /* Reverse the high 128bit and low 128bit of op0.
+	 Selector sample: E_V4DFmode, { 2, 3, 0, 1 }
+	 Use xvpermi.q to do this job.  */
+      if (!d->testing_p)
+	{
+	  if (d->vmode == E_V4DImode)
+	    {
+	      emit_insn (gen_lasx_xvpermi_q_v4di (d->target, d->target, d->op0,
+						  GEN_INT (0x01)));
+	    }
+	  else if (d->vmode == E_V4DFmode)
+	    {
+	      emit_insn (gen_lasx_xvpermi_q_v4df (d->target, d->target, d->op0,
+						  GEN_INT (0x01)));
+	    }
+	  else
+	    {
+	      gcc_unreachable ();
+	    }
+	}
+
+      ok = true;
+      goto expand_perm_const_2_end;
+    }
+  else if (loongarch_is_single_op_perm (d))
+    {
+      /* Permutation that only selects elements from op0.  */
+      if (!d->testing_p)
+	{
+	  /* Prepare temp registers instead of modifying the original ops.  */
+	  use_alt_op = true;
+	  op0_alt = gen_reg_rtx (d->vmode);
+	  op1_alt = gen_reg_rtx (d->vmode);
+
+	  emit_move_insn (op0_alt, d->op0);
+	  emit_move_insn (op1_alt, d->op1);
+
+	  rtx conv_op0 = gen_rtx_SUBREG (E_V4DImode, d->op0, 0);
+	  rtx conv_op0a = gen_rtx_SUBREG (E_V4DImode, op0_alt, 0);
+	  rtx conv_op1a = gen_rtx_SUBREG (E_V4DImode, op1_alt, 0);
+
+	  /* Duplicate op0's low 128bit into op0_alt, then duplicate op0's
+	     high 128bit into op1_alt.  After this, the xvshuf.* insn's
+	     selector argument can access all the elements we need for a
+	     correct permutation result.  */
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op0a, conv_op0a, conv_op0,
+					      GEN_INT (0x00)));
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op1a, conv_op1a, conv_op0,
+					      GEN_INT (0x11)));
+
+	  /* In this case, there's no need to remap selector's indices.  */
+	  for (i = 0; i < d->nelt; i += 1)
+	    {
+	      remapped[i] = d->perm[i];
+	    }
+	}
+    }
+  else if (loongarch_is_divisible_perm (d))
+    {
+      /* Divisible perm:
+	 Low 128bit of selector only selects elements of op0,
+	 and high 128bit of selector only selects elements of op1.  */
+
+      if (!d->testing_p)
+	{
+	  /* Prepare temp register instead of modify original op.  */
+	  use_alt_op = true;
+	  op0_alt = gen_reg_rtx (d->vmode);
+	  op1_alt = gen_reg_rtx (d->vmode);
+
+	  emit_move_insn (op0_alt, d->op0);
+	  emit_move_insn (op1_alt, d->op1);
+
+	  rtx conv_op0a = gen_rtx_SUBREG (E_V4DImode, op0_alt, 0);
+	  rtx conv_op1a = gen_rtx_SUBREG (E_V4DImode, op1_alt, 0);
+	  rtx conv_op0 = gen_rtx_SUBREG (E_V4DImode, d->op0, 0);
+	  rtx conv_op1 = gen_rtx_SUBREG (E_V4DImode, d->op1, 0);
+
+	  /* Reorganize op0's hi/lo 128bit and op1's hi/lo 128bit, to make sure
+	     that selector's low 128bit can access all op0's elements, and
+	     selector's high 128bit can access all op1's elements.  */
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op0a, conv_op0a, conv_op1,
+					      GEN_INT (0x02)));
+	  emit_insn (gen_lasx_xvpermi_q_v4di (conv_op1a, conv_op1a, conv_op0,
+					      GEN_INT (0x31)));
+
+	  /* No need to modify indices.  */
+	  for (i = 0; i < d->nelt; i += 1)
+	    {
+	      remapped[i] = d->perm[i];
+	    }
+	}
+    }
+  else if (loongarch_is_triple_stride_extract (d))
+    {
+      /* Selector sample: E_V4DFmode, { 1, 4, 7, 0 }.  */
+      if (!d->testing_p)
+	{
+	  /* Resolve it with brute force modification.  */
+	  remapped[0] = 1;
+	  remapped[1] = 2;
+	  remapped[2] = 3;
+	  remapped[3] = 0;
+	}
+    }
+  else
+    {
+      /* When all of the detections above have failed, we try one last
+	 strategy.
+	 The for loop below checks the following rules based on each index's
+	 value, its position inside the selector vector, and the behavior of
+	 the xvshuf.* insn; then we take the corresponding action: replace the
+	 index with a new value, or give up the whole permutation expansion.  */
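+      /* For example, in E_V8SImode the low half of the selector may only use
+	 indices 0..3 (op0's low 128bit) or 8..11 (op1's low 128bit, remapped
+	 to 4..7), and the high half may only use 4..7 (op0's high 128bit,
+	 remapped to 0..3) or 12..15 (op1's high 128bit).  */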
+      for (i = 0; i < d->nelt; i += 1)
+	{
+	  /* % (2 * d->nelt)  */
+	  idx = d->perm[i];
+
+	  /* If the index is in the low 128bit of the selector vector.  */
+	  if (i < d->nelt / 2)
+	    {
+	      /* Fail case 1: index tries to reach an element located in op0's
+		 high 128bit.  */
+	      if (idx >= d->nelt / 2 && idx < d->nelt)
+		{
+		  goto expand_perm_const_2_end;
+		}
+	      /* Fail case 2: index tries to reach an element located in
+		 op1's high 128bit.  */
+	      if (idx >= (d->nelt + d->nelt / 2))
+		{
+		  goto expand_perm_const_2_end;
+		}
+
+	      /* Success case: index tries to reach elements located in op1's
+		 low 128bit.  Apply a - (nelt / 2) offset to the original
+		 value.  */
+	      if (idx >= d->nelt && idx < (d->nelt + d->nelt / 2))
+		{
+		  idx -= d->nelt / 2;
+		}
+	    }
+	  /* If the index is in the high 128bit of the selector vector.  */
+	  else
+	    {
+	      /* Fail case 1: index tries to reach an element located in
+		 op1's low 128bit.  */
+	      if (idx >= d->nelt && idx < (d->nelt + d->nelt / 2))
+		{
+		  goto expand_perm_const_2_end;
+		}
+	      /* Fail case 2: index tries to reach an element located in
+		 op0's low 128bit.  */
+	      if (idx < (d->nelt / 2))
+		{
+		  goto expand_perm_const_2_end;
+		}
+	      /* Success case: index tries to reach an element located in
+		 op0's high 128bit.  */
+	      if (idx >= d->nelt / 2 && idx < d->nelt)
+		{
+		  idx -= d->nelt / 2;
+		}
+	    }
+	  /* No need to process other cases not mentioned above.  */
+
+	  /* Assign with original or processed value.  */
+	  remapped[i] = idx;
+	}
+    }
+
+  ok = true;
+  /* If testing_p is true, the compiler is trying to figure out whether the
+     backend can handle this permutation, but doesn't want to generate actual
+     insns.  So if true, exit directly.  */
+  if (d->testing_p)
+    {
+      goto expand_perm_const_2_end;
+    }
+
+  /* Convert remapped selector array to RTL array.  */
+  for (i = 0; i < d->nelt; i += 1)
+    {
+      rperm[i] = GEN_INT (remapped[i]);
+    }
+
+  /* Copy the selector vector from memory to a vector register for the later
+     insn gen function.
+     If the vector elements are floating-point values, we cannot pass the
+     selector argument to the insn gen function directly, because of the
+     insn template definition.  As a solution, generate an integral-mode
+     subreg of the target, then copy the selector vector (which is in
+     integral mode) to this subreg.  */
+  switch (d->vmode)
+    {
+    case E_V4DFmode:
+      sel = gen_rtx_CONST_VECTOR (E_V4DImode, gen_rtvec_v (d->nelt, rperm));
+      tmp = gen_rtx_SUBREG (E_V4DImode, d->target, 0);
+      emit_move_insn (tmp, sel);
+      break;
+    case E_V8SFmode:
+      sel = gen_rtx_CONST_VECTOR (E_V8SImode, gen_rtvec_v (d->nelt, rperm));
+      tmp = gen_rtx_SUBREG (E_V8SImode, d->target, 0);
+      emit_move_insn (tmp, sel);
+      break;
+    default:
+      sel = gen_rtx_CONST_VECTOR (d->vmode, gen_rtvec_v (d->nelt, rperm));
+      emit_move_insn (d->target, sel);
+      break;
+    }
+
+  target = d->target;
+  /* If temp op registers were requested in a previous branch, use the temp
+     registers instead of the original ones.  */
+  if (use_alt_op)
+    {
+      op0 = op0_alt != NULL_RTX ? op0_alt : d->op0;
+      op1 = op1_alt != NULL_RTX ? op1_alt : d->op1;
+    }
+  else
+    {
+      op0 = d->op0;
+      op1 = d->one_vector_p ? d->op0 : d->op1;
+    }
+
+  /* We FINALLY can generate xvshuf.* insn.  */
+  switch (d->vmode)
+    {
+    case E_V4DFmode:
+      emit_insn (gen_lasx_xvshuf_d_f (target, target, op1, op0));
+      break;
+    case E_V4DImode:
+      emit_insn (gen_lasx_xvshuf_d (target, target, op1, op0));
+      break;
+    case E_V8SFmode:
+      emit_insn (gen_lasx_xvshuf_w_f (target, target, op1, op0));
+      break;
+    case E_V8SImode:
+      emit_insn (gen_lasx_xvshuf_w (target, target, op1, op0));
+      break;
+    case E_V16HImode:
+      emit_insn (gen_lasx_xvshuf_h (target, target, op1, op0));
+      break;
+    case E_V32QImode:
+      emit_insn (gen_lasx_xvshuf_b (target, op1, op0, target));
+      break;
+    default:
+      gcc_unreachable ();
+      break;
+    }
+
+  /* Extra insn for swapping the hi/lo 128bit of target vector register.  */
+  if (reverse_hi_lo)
+    {
+      switch (d->vmode)
+	{
+	case E_V4DFmode:
+	  emit_insn (gen_lasx_xvpermi_q_v4df (d->target, d->target,
+					      d->target, GEN_INT (0x1)));
+	  break;
+	case E_V4DImode:
+	  emit_insn (gen_lasx_xvpermi_q_v4di (d->target, d->target,
+					      d->target, GEN_INT (0x1)));
+	  break;
+	case E_V8SFmode:
+	  emit_insn (gen_lasx_xvpermi_q_v8sf (d->target, d->target,
+					      d->target, GEN_INT (0x1)));
+	  break;
+	case E_V8SImode:
+	  emit_insn (gen_lasx_xvpermi_q_v8si (d->target, d->target,
+					      d->target, GEN_INT (0x1)));
+	  break;
+	case E_V16HImode:
+	  emit_insn (gen_lasx_xvpermi_q_v16hi (d->target, d->target,
+					       d->target, GEN_INT (0x1)));
+	  break;
+	case E_V32QImode:
+	  emit_insn (gen_lasx_xvpermi_q_v32qi (d->target, d->target,
+					       d->target, GEN_INT (0x1)));
+	  break;
+	default:
+	  break;
+	}
+    }
+  /* Extra insn required by odd/even extraction.  Swapping the second and third
+     64bit in target vector register.  */
+  else if (extract_ev_od)
+    {
+      rtx converted = gen_rtx_SUBREG (E_V4DImode, d->target, 0);
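+      /* With immediate 0xD8 (0b11011000), xvpermi.d places the source 64bit
+	 elements in the order { 0, 2, 1, 3 }, i.e. it exchanges elements 1
+	 and 2.  */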
+      emit_insn (gen_lasx_xvpermi_d_v4di (converted, converted,
+					  GEN_INT (0xD8)));
+    }
+
+expand_perm_const_2_end:
+  return ok;
 }
 
 /* Implement TARGET_VECTORIZE_VEC_PERM_CONST.  */
@@ -7807,7 +9651,7 @@ loongarch_sched_reassociation_width (unsigned int opc, machine_mode mode)
     case CPU_LOONGARCH64:
     case CPU_LA464:
       /* Vector part.  */
-      if (LSX_SUPPORTED_MODE_P (mode))
+      if (LSX_SUPPORTED_MODE_P (mode) || LASX_SUPPORTED_MODE_P (mode))
 	{
 	  /* Integer vector instructions execute in FP unit.
 	     The width of integer/float-point vector instructions is 3.  */
@@ -7847,6 +9691,44 @@ loongarch_expand_vector_extract (rtx target, rtx vec, int elt)
     case E_V16QImode:
       break;
 
+    case E_V32QImode:
+      if (TARGET_LASX)
+	{
+	  if (elt >= 16)
+	    {
+	      tmp = gen_reg_rtx (V32QImode);
+	      emit_insn (gen_lasx_xvpermi_d_v32qi (tmp, vec, GEN_INT (0xe)));
+	      loongarch_expand_vector_extract (target,
+					       gen_lowpart (V16QImode, tmp),
+					       elt & 15);
+	    }
+	  else
+	    loongarch_expand_vector_extract (target,
+					     gen_lowpart (V16QImode, vec),
+					     elt & 15);
+	  return;
+	}
+      break;
+
+    case E_V16HImode:
+      if (TARGET_LASX)
+	{
+	  if (elt >= 8)
+	    {
+	      tmp = gen_reg_rtx (V16HImode);
+	      emit_insn (gen_lasx_xvpermi_d_v16hi (tmp, vec, GEN_INT (0xe)));
+	      loongarch_expand_vector_extract (target,
+					       gen_lowpart (V8HImode, tmp),
+					       elt & 7);
+	    }
+	  else
+	    loongarch_expand_vector_extract (target,
+					     gen_lowpart (V8HImode, vec),
+					     elt & 7);
+	  return;
+	}
+      break;
+
     default:
       break;
     }
@@ -7885,6 +9767,31 @@ emit_reduc_half (rtx dest, rtx src, int i)
     case E_V2DFmode:
       tem = gen_lsx_vbsrl_d_f (dest, src, GEN_INT (8));
       break;
+    case E_V8SFmode:
+      if (i == 256)
+	tem = gen_lasx_xvpermi_d_v8sf (dest, src, GEN_INT (0xe));
+      else
+	tem = gen_lasx_xvshuf4i_w_f (dest, src,
+				     GEN_INT (i == 128 ? 2 + (3 << 2) : 1));
+      break;
+    case E_V4DFmode:
+      if (i == 256)
+	tem = gen_lasx_xvpermi_d_v4df (dest, src, GEN_INT (0xe));
+      else
+	tem = gen_lasx_xvpermi_d_v4df (dest, src, const1_rtx);
+      break;
+    case E_V32QImode:
+    case E_V16HImode:
+    case E_V8SImode:
+    case E_V4DImode:
+      d = gen_reg_rtx (V4DImode);
+      if (i == 256)
+	tem = gen_lasx_xvpermi_d_v4di (d, gen_lowpart (V4DImode, src),
+				       GEN_INT (0xe));
+      else
+	tem = gen_lasx_xvbsrl_d (d, gen_lowpart (V4DImode, src),
+				 GEN_INT (i / 16));
+      break;
     case E_V16QImode:
     case E_V8HImode:
     case E_V4SImode:
@@ -7932,10 +9839,57 @@ loongarch_expand_vec_unpack (rtx operands[2], bool unsigned_p, bool high_p)
 {
   machine_mode imode = GET_MODE (operands[1]);
   rtx (*unpack) (rtx, rtx, rtx);
+  rtx (*extend) (rtx, rtx);
   rtx (*cmpFunc) (rtx, rtx, rtx);
+  rtx (*swap_hi_lo) (rtx, rtx, rtx, rtx);
   rtx tmp, dest;
 
-  if (ISA_HAS_LSX)
+  if (ISA_HAS_LASX && GET_MODE_SIZE (imode) == 32)
+    {
+      switch (imode)
+	{
+	case E_V8SImode:
+	  if (unsigned_p)
+	    extend = gen_lasx_vext2xv_du_wu;
+	  else
+	    extend = gen_lasx_vext2xv_d_w;
+	  swap_hi_lo = gen_lasx_xvpermi_q_v8si;
+	  break;
+
+	case E_V16HImode:
+	  if (unsigned_p)
+	    extend = gen_lasx_vext2xv_wu_hu;
+	  else
+	    extend = gen_lasx_vext2xv_w_h;
+	  swap_hi_lo = gen_lasx_xvpermi_q_v16hi;
+	  break;
+
+	case E_V32QImode:
+	  if (unsigned_p)
+	    extend = gen_lasx_vext2xv_hu_bu;
+	  else
+	    extend = gen_lasx_vext2xv_h_b;
+	  swap_hi_lo = gen_lasx_xvpermi_q_v32qi;
+	  break;
+
+	default:
+	  gcc_unreachable ();
+	  break;
+	}
+
+      if (high_p)
+	{
+	  tmp = gen_reg_rtx (imode);
+	  emit_insn (swap_hi_lo (tmp, tmp, operands[1], const1_rtx));
+	  emit_insn (extend (operands[0], tmp));
+	  return;
+	}
+
+      emit_insn (extend (operands[0], operands[1]));
+      return;
+
+    }
+  else if (ISA_HAS_LSX)
     {
       switch (imode)
 	{
@@ -8036,8 +9990,17 @@ loongarch_gen_const_int_vector_shuffle (machine_mode mode, int val)
   return gen_rtx_PARALLEL (VOIDmode, gen_rtvec_v (nunits, elts));
 }
 
+
 /* Expand a vector initialization.  */
 
+void
+loongarch_expand_vector_group_init (rtx target, rtx vals)
+{
+  rtx ops[2] = { XVECEXP (vals, 0, 0), XVECEXP (vals, 0, 1) };
+  emit_insn (gen_rtx_SET (target, gen_rtx_VEC_CONCAT (E_V32QImode, ops[0],
+						      ops[1])));
+}
+
 void
 loongarch_expand_vector_init (rtx target, rtx vals)
 {
@@ -8057,6 +10020,285 @@ loongarch_expand_vector_init (rtx target, rtx vals)
 	all_same = false;
     }
 
+  if (ISA_HAS_LASX && GET_MODE_SIZE (vmode) == 32)
+    {
+      if (all_same)
+	{
+	  rtx same = XVECEXP (vals, 0, 0);
+	  rtx temp, temp2;
+
+	  if (CONST_INT_P (same) && nvar == 0
+	      && loongarch_signed_immediate_p (INTVAL (same), 10, 0))
+	    {
+	      switch (vmode)
+		{
+		case E_V32QImode:
+		case E_V16HImode:
+		case E_V8SImode:
+		case E_V4DImode:
+		  temp = gen_rtx_CONST_VECTOR (vmode, XVEC (vals, 0));
+		  emit_move_insn (target, temp);
+		  return;
+
+		default:
+		  gcc_unreachable ();
+		}
+	    }
+
+	  temp = gen_reg_rtx (imode);
+	  if (imode == GET_MODE (same))
+	    temp2 = same;
+	  else if (GET_MODE_SIZE (imode) >= UNITS_PER_WORD)
+	    {
+	      if (GET_CODE (same) == MEM)
+		{
+		  rtx reg_tmp = gen_reg_rtx (GET_MODE (same));
+		  loongarch_emit_move (reg_tmp, same);
+		  temp2 = simplify_gen_subreg (imode, reg_tmp,
+					       GET_MODE (reg_tmp), 0);
+		}
+	      else
+		temp2 = simplify_gen_subreg (imode, same,
+					     GET_MODE (same), 0);
+	    }
+	  else
+	    {
+	      if (GET_CODE (same) == MEM)
+		{
+		  rtx reg_tmp = gen_reg_rtx (GET_MODE (same));
+		  loongarch_emit_move (reg_tmp, same);
+		  temp2 = lowpart_subreg (imode, reg_tmp,
+					  GET_MODE (reg_tmp));
+		}
+	      else
+		temp2 = lowpart_subreg (imode, same, GET_MODE (same));
+	    }
+	  emit_move_insn (temp, temp2);
+
+	  switch (vmode)
+	    {
+	    case E_V32QImode:
+	    case E_V16HImode:
+	    case E_V8SImode:
+	    case E_V4DImode:
+	      loongarch_emit_move (target,
+				   gen_rtx_VEC_DUPLICATE (vmode, temp));
+	      break;
+
+	    case E_V8SFmode:
+	      emit_insn (gen_lasx_xvreplve0_w_f_scalar (target, temp));
+	      break;
+
+	    case E_V4DFmode:
+	      emit_insn (gen_lasx_xvreplve0_d_f_scalar (target, temp));
+	      break;
+
+	    default:
+	      gcc_unreachable ();
+	    }
+	}
+      else
+	{
+	  rtvec vec = shallow_copy_rtvec (XVEC (vals, 0));
+
+	  for (i = 0; i < nelt; ++i)
+	    RTVEC_ELT (vec, i) = CONST0_RTX (imode);
+
+	  emit_move_insn (target, gen_rtx_CONST_VECTOR (vmode, vec));
+
+	  machine_mode half_mode = VOIDmode;
+	  rtx target_hi, target_lo;
+
+	  switch (vmode)
+	    {
+	    case E_V32QImode:
+	      half_mode = E_V16QImode;
+	      target_hi = gen_reg_rtx (half_mode);
+	      target_lo = gen_reg_rtx (half_mode);
+	      for (i = 0; i < nelt/2; ++i)
+		{
+		  rtx temp_hi = gen_reg_rtx (imode);
+		  rtx temp_lo = gen_reg_rtx (imode);
+		  emit_move_insn (temp_hi, XVECEXP (vals, 0, i+nelt/2));
+		  emit_move_insn (temp_lo, XVECEXP (vals, 0, i));
+		  if (i == 0)
+		    {
+		      emit_insn (gen_lsx_vreplvei_b_scalar (target_hi,
+							    temp_hi));
+		      emit_insn (gen_lsx_vreplvei_b_scalar (target_lo,
+							    temp_lo));
+		    }
+		  else
+		    {
+		      emit_insn (gen_vec_setv16qi (target_hi, temp_hi,
+						   GEN_INT (i)));
+		      emit_insn (gen_vec_setv16qi (target_lo, temp_lo,
+						   GEN_INT (i)));
+		    }
+		}
+	      emit_insn (gen_rtx_SET (target,
+				      gen_rtx_VEC_CONCAT (vmode, target_hi,
+							  target_lo)));
+	      break;
+
+	    case E_V16HImode:
+	      half_mode = E_V8HImode;
+	      target_hi = gen_reg_rtx (half_mode);
+	      target_lo = gen_reg_rtx (half_mode);
+	      for (i = 0; i < nelt/2; ++i)
+		{
+		  rtx temp_hi = gen_reg_rtx (imode);
+		  rtx temp_lo = gen_reg_rtx (imode);
+		  emit_move_insn (temp_hi, XVECEXP (vals, 0, i+nelt/2));
+		  emit_move_insn (temp_lo, XVECEXP (vals, 0, i));
+		  if (i == 0)
+		    {
+		      emit_insn (gen_lsx_vreplvei_h_scalar (target_hi,
+							    temp_hi));
+		      emit_insn (gen_lsx_vreplvei_h_scalar (target_lo,
+							    temp_lo));
+		    }
+		  else
+		    {
+		      emit_insn (gen_vec_setv8hi (target_hi, temp_hi,
+						  GEN_INT (i)));
+		      emit_insn (gen_vec_setv8hi (target_lo, temp_lo,
+						  GEN_INT (i)));
+		    }
+		}
+	      emit_insn (gen_rtx_SET (target,
+				      gen_rtx_VEC_CONCAT (vmode, target_hi,
+							  target_lo)));
+	      break;
+
+	    case E_V8SImode:
+	      half_mode = V4SImode;
+	      target_hi = gen_reg_rtx (half_mode);
+	      target_lo = gen_reg_rtx (half_mode);
+	      for (i = 0; i < nelt/2; ++i)
+		{
+		  rtx temp_hi = gen_reg_rtx (imode);
+		  rtx temp_lo = gen_reg_rtx (imode);
+		  emit_move_insn (temp_hi, XVECEXP (vals, 0, i+nelt/2));
+		  emit_move_insn (temp_lo, XVECEXP (vals, 0, i));
+		  if (i == 0)
+		    {
+		      emit_insn (gen_lsx_vreplvei_w_scalar (target_hi,
+							    temp_hi));
+		      emit_insn (gen_lsx_vreplvei_w_scalar (target_lo,
+							    temp_lo));
+		    }
+		  else
+		    {
+		      emit_insn (gen_vec_setv4si (target_hi, temp_hi,
+						  GEN_INT (i)));
+		      emit_insn (gen_vec_setv4si (target_lo, temp_lo,
+						  GEN_INT (i)));
+		    }
+		}
+	      emit_insn (gen_rtx_SET (target,
+				      gen_rtx_VEC_CONCAT (vmode, target_hi,
+							  target_lo)));
+	      break;
+
+	    case E_V4DImode:
+	      half_mode = E_V2DImode;
+	      target_hi = gen_reg_rtx (half_mode);
+	      target_lo = gen_reg_rtx (half_mode);
+	      for (i = 0; i < nelt/2; ++i)
+		{
+		  rtx temp_hi = gen_reg_rtx (imode);
+		  rtx temp_lo = gen_reg_rtx (imode);
+		  emit_move_insn (temp_hi, XVECEXP (vals, 0, i+nelt/2));
+		  emit_move_insn (temp_lo, XVECEXP (vals, 0, i));
+		  if (i == 0)
+		    {
+		      emit_insn (gen_lsx_vreplvei_d_scalar (target_hi,
+							    temp_hi));
+		      emit_insn (gen_lsx_vreplvei_d_scalar (target_lo,
+							    temp_lo));
+		    }
+		  else
+		    {
+		      emit_insn (gen_vec_setv2di (target_hi, temp_hi,
+						  GEN_INT (i)));
+		      emit_insn (gen_vec_setv2di (target_lo, temp_lo,
+						  GEN_INT (i)));
+		    }
+		}
+	      emit_insn (gen_rtx_SET (target,
+				      gen_rtx_VEC_CONCAT (vmode, target_hi,
+							  target_lo)));
+	      break;
+
+	    case E_V8SFmode:
+	      half_mode = E_V4SFmode;
+	      target_hi = gen_reg_rtx (half_mode);
+	      target_lo = gen_reg_rtx (half_mode);
+	      for (i = 0; i < nelt/2; ++i)
+		{
+		  rtx temp_hi = gen_reg_rtx (imode);
+		  rtx temp_lo = gen_reg_rtx (imode);
+		  emit_move_insn (temp_hi, XVECEXP (vals, 0, i+nelt/2));
+		  emit_move_insn (temp_lo, XVECEXP (vals, 0, i));
+		  if (i == 0)
+		    {
+		      emit_insn (gen_lsx_vreplvei_w_f_scalar (target_hi,
+							      temp_hi));
+		      emit_insn (gen_lsx_vreplvei_w_f_scalar (target_lo,
+							      temp_lo));
+		    }
+		  else
+		    {
+		      emit_insn (gen_vec_setv4sf (target_hi, temp_hi,
+						  GEN_INT (i)));
+		      emit_insn (gen_vec_setv4sf (target_lo, temp_lo,
+						  GEN_INT (i)));
+		    }
+		}
+	      emit_insn (gen_rtx_SET (target,
+				      gen_rtx_VEC_CONCAT (vmode, target_hi,
+							  target_lo)));
+	      break;
+
+	    case E_V4DFmode:
+	      half_mode = E_V2DFmode;
+	      target_hi = gen_reg_rtx (half_mode);
+	      target_lo = gen_reg_rtx (half_mode);
+	      for (i = 0; i < nelt/2; ++i)
+		{
+		  rtx temp_hi = gen_reg_rtx (imode);
+		  rtx temp_lo = gen_reg_rtx (imode);
+		  emit_move_insn (temp_hi, XVECEXP (vals, 0, i+nelt/2));
+		  emit_move_insn (temp_lo, XVECEXP (vals, 0, i));
+		  if (i == 0)
+		    {
+		      emit_insn (gen_lsx_vreplvei_d_f_scalar (target_hi,
+							      temp_hi));
+		      emit_insn (gen_lsx_vreplvei_d_f_scalar (target_lo,
+							      temp_lo));
+		    }
+		  else
+		    {
+		      emit_insn (gen_vec_setv2df (target_hi, temp_hi,
+						  GEN_INT (i)));
+		      emit_insn (gen_vec_setv2df (target_lo, temp_lo,
+						  GEN_INT (i)));
+		    }
+		}
+	      emit_insn (gen_rtx_SET (target,
+				      gen_rtx_VEC_CONCAT (vmode, target_hi,
+							  target_lo)));
+	      break;
+
+	    default:
+	      gcc_unreachable ();
+	    }
+
+	}
+      return;
+    }
+
   if (ISA_HAS_LSX)
     {
       if (all_same)
@@ -8304,6 +10546,38 @@ loongarch_expand_lsx_cmp (rtx dest, enum rtx_code cond, rtx op0, rtx op1)
 	}
       break;
 
+    case E_V8SFmode:
+    case E_V4DFmode:
+      switch (cond)
+	{
+	case UNORDERED:
+	case ORDERED:
+	case EQ:
+	case NE:
+	case UNEQ:
+	case UNLE:
+	case UNLT:
+	  break;
+	case LTGT: cond = NE; break;
+	case UNGE: cond = UNLE; std::swap (op0, op1); break;
+	case UNGT: cond = UNLT; std::swap (op0, op1); break;
+	case LE: unspec = UNSPEC_LASX_XVFCMP_SLE; break;
+	case LT: unspec = UNSPEC_LASX_XVFCMP_SLT; break;
+	case GE: unspec = UNSPEC_LASX_XVFCMP_SLE; std::swap (op0, op1); break;
+	case GT: unspec = UNSPEC_LASX_XVFCMP_SLT; std::swap (op0, op1); break;
+	default:
+	  gcc_unreachable ();
+	}
+      if (unspec < 0)
+	loongarch_emit_binary (cond, dest, op0, op1);
+      else
+	{
+	  rtx x = gen_rtx_UNSPEC (GET_MODE (dest),
+				  gen_rtvec (2, op0, op1), unspec);
+	  emit_insn (gen_rtx_SET (dest, x));
+	}
+      break;
+
     default:
       gcc_unreachable ();
       break;
@@ -8641,7 +10915,7 @@ loongarch_builtin_support_vector_misalignment (machine_mode mode,
 					       int misalignment,
 					       bool is_packed)
 {
-  if (ISA_HAS_LSX && STRICT_ALIGNMENT)
+  if ((ISA_HAS_LSX || ISA_HAS_LASX) && STRICT_ALIGNMENT)
     {
       if (optab_handler (movmisalign_optab, mode) == CODE_FOR_nothing)
 	return false;
diff --git a/gcc/config/loongarch/loongarch.h b/gcc/config/loongarch/loongarch.h
index e939dd826d1..39852d2bb12 100644
--- a/gcc/config/loongarch/loongarch.h
+++ b/gcc/config/loongarch/loongarch.h
@@ -186,6 +186,11 @@ along with GCC; see the file COPYING3.  If not see
 /* Width of a LSX vector register in bits.  */
 #define BITS_PER_LSX_REG (UNITS_PER_LSX_REG * BITS_PER_UNIT)
 
+/* Width of a LASX vector register in bytes.  */
+#define UNITS_PER_LASX_REG 32
+/* Width of a LASX vector register in bits.  */
+#define BITS_PER_LASX_REG (UNITS_PER_LASX_REG * BITS_PER_UNIT)
+
 /* For LARCH, width of a floating point register.  */
 #define UNITS_PER_FPREG (TARGET_DOUBLE_FLOAT ? 8 : 4)
 
@@ -248,10 +253,11 @@ along with GCC; see the file COPYING3.  If not see
 #define STRUCTURE_SIZE_BOUNDARY 8
 
 /* There is no point aligning anything to a rounder boundary than
-   LONG_DOUBLE_TYPE_SIZE, unless under LSX the bigggest alignment is
-   BITS_PER_LSX_REG/..  */
+   LONG_DOUBLE_TYPE_SIZE, unless under LSX/LASX the biggest alignment is
+   BITS_PER_LSX_REG/BITS_PER_LASX_REG/..  */
 #define BIGGEST_ALIGNMENT \
-  (ISA_HAS_LSX ? BITS_PER_LSX_REG : LONG_DOUBLE_TYPE_SIZE)
+  (ISA_HAS_LASX ? BITS_PER_LASX_REG \
+   : (ISA_HAS_LSX ? BITS_PER_LSX_REG : LONG_DOUBLE_TYPE_SIZE))
 
 /* All accesses must be aligned.  */
 #define STRICT_ALIGNMENT (TARGET_STRICT_ALIGN)
@@ -391,6 +397,10 @@ along with GCC; see the file COPYING3.  If not see
 #define LSX_REG_LAST  FP_REG_LAST
 #define LSX_REG_NUM   FP_REG_NUM
 
+#define LASX_REG_FIRST FP_REG_FIRST
+#define LASX_REG_LAST  FP_REG_LAST
+#define LASX_REG_NUM   FP_REG_NUM
+
 /* The DWARF 2 CFA column which tracks the return address from a
    signal handler context.  This means that to maintain backwards
    compatibility, no hard register can be assigned this column if it
@@ -409,9 +419,12 @@ along with GCC; see the file COPYING3.  If not see
   ((unsigned int) ((int) (REGNO) - FCC_REG_FIRST) < FCC_REG_NUM)
 #define LSX_REG_P(REGNO) \
   ((unsigned int) ((int) (REGNO) - LSX_REG_FIRST) < LSX_REG_NUM)
+#define LASX_REG_P(REGNO) \
+  ((unsigned int) ((int) (REGNO) - LASX_REG_FIRST) < LASX_REG_NUM)
 
 #define FP_REG_RTX_P(X) (REG_P (X) && FP_REG_P (REGNO (X)))
 #define LSX_REG_RTX_P(X) (REG_P (X) && LSX_REG_P (REGNO (X)))
+#define LASX_REG_RTX_P(X) (REG_P (X) && LASX_REG_P (REGNO (X)))
 
 /* Select a register mode required for caller save of hard regno REGNO.  */
 #define HARD_REGNO_CALLER_SAVE_MODE(REGNO, NREGS, MODE) \
@@ -733,6 +746,13 @@ enum reg_class
    && (GET_MODE_CLASS (MODE) == MODE_VECTOR_INT		\
        || GET_MODE_CLASS (MODE) == MODE_VECTOR_FLOAT))
 
+#define LASX_SUPPORTED_MODE_P(MODE)			\
+  (ISA_HAS_LASX						\
+   && (GET_MODE_SIZE (MODE) == UNITS_PER_LSX_REG	\
+       || GET_MODE_SIZE (MODE) == UNITS_PER_LASX_REG)	\
+   && (GET_MODE_CLASS (MODE) == MODE_VECTOR_INT		\
+       || GET_MODE_CLASS (MODE) == MODE_VECTOR_FLOAT))
+
 /* 1 if N is a possible register number for function argument passing.
    We have no FP argument registers when soft-float.  */
 
@@ -985,7 +1005,39 @@ typedef struct {
   { "vr28",	28 + FP_REG_FIRST },					\
   { "vr29",	29 + FP_REG_FIRST },					\
   { "vr30",	30 + FP_REG_FIRST },					\
-  { "vr31",	31 + FP_REG_FIRST }					\
+  { "vr31",	31 + FP_REG_FIRST },					\
+  { "xr0",	 0 + FP_REG_FIRST },					\
+  { "xr1",	 1 + FP_REG_FIRST },					\
+  { "xr2",	 2 + FP_REG_FIRST },					\
+  { "xr3",	 3 + FP_REG_FIRST },					\
+  { "xr4",	 4 + FP_REG_FIRST },					\
+  { "xr5",	 5 + FP_REG_FIRST },					\
+  { "xr6",	 6 + FP_REG_FIRST },					\
+  { "xr7",	 7 + FP_REG_FIRST },					\
+  { "xr8",	 8 + FP_REG_FIRST },					\
+  { "xr9",	 9 + FP_REG_FIRST },					\
+  { "xr10",	10 + FP_REG_FIRST },					\
+  { "xr11",	11 + FP_REG_FIRST },					\
+  { "xr12",	12 + FP_REG_FIRST },					\
+  { "xr13",	13 + FP_REG_FIRST },					\
+  { "xr14",	14 + FP_REG_FIRST },					\
+  { "xr15",	15 + FP_REG_FIRST },					\
+  { "xr16",	16 + FP_REG_FIRST },					\
+  { "xr17",	17 + FP_REG_FIRST },					\
+  { "xr18",	18 + FP_REG_FIRST },					\
+  { "xr19",	19 + FP_REG_FIRST },					\
+  { "xr20",	20 + FP_REG_FIRST },					\
+  { "xr21",	21 + FP_REG_FIRST },					\
+  { "xr22",	22 + FP_REG_FIRST },					\
+  { "xr23",	23 + FP_REG_FIRST },					\
+  { "xr24",	24 + FP_REG_FIRST },					\
+  { "xr25",	25 + FP_REG_FIRST },					\
+  { "xr26",	26 + FP_REG_FIRST },					\
+  { "xr27",	27 + FP_REG_FIRST },					\
+  { "xr28",	28 + FP_REG_FIRST },					\
+  { "xr29",	29 + FP_REG_FIRST },					\
+  { "xr30",	30 + FP_REG_FIRST },					\
+  { "xr31",	31 + FP_REG_FIRST }					\
 }
 
 /* Globalizing directive for a label.  */
diff --git a/gcc/config/loongarch/loongarch.md b/gcc/config/loongarch/loongarch.md
index 7b8978e2533..30b2cb91e9a 100644
--- a/gcc/config/loongarch/loongarch.md
+++ b/gcc/config/loongarch/loongarch.md
@@ -163,7 +163,7 @@ (define_attr "alu_type" "unknown,add,sub,not,nor,and,or,xor,simd_add"
 
 ;; Main data type used by the insn
 (define_attr "mode" "unknown,none,QI,HI,SI,DI,TI,SF,DF,TF,FCC,
-  V2DI,V4SI,V8HI,V16QI,V2DF,V4SF"
+  V2DI,V4SI,V8HI,V16QI,V2DF,V4SF,V4DI,V8SI,V16HI,V32QI,V4DF,V8SF"
   (const_string "unknown"))
 
 ;; True if the main data type is twice the size of a word.
@@ -422,12 +422,14 @@ (define_mode_attr ifmt [(SI "w") (DI "l")])
 ;; floating-point mode or vector mode.
 (define_mode_attr UNITMODE [(SF "SF") (DF "DF") (V2SF "SF") (V4SF "SF")
 			    (V16QI "QI") (V8HI "HI") (V4SI "SI") (V2DI "DI")
-			    (V2DF "DF")])
+			    (V2DF "DF") (V8SF "SF") (V32QI "QI") (V16HI "HI")
+			    (V8SI "SI") (V4DI "DI") (V4DF "DF")])
 
 ;; As above, but in lower case.
 (define_mode_attr unitmode [(SF "sf") (DF "df") (V2SF "sf") (V4SF "sf")
 			    (V16QI "qi") (V8QI "qi") (V8HI "hi") (V4HI "hi")
-			    (V4SI "si") (V2SI "si") (V2DI "di") (V2DF "df")])
+			    (V4SI "si") (V2SI "si") (V2DI "di") (V2DF "df")
+			    (V8SI "si") (V4DI "di") (V32QI "qi") (V16HI "hi")
+			    (V8SF "sf") (V4DF "df")])
 
 ;; This attribute gives the integer mode that has half the size of
 ;; the controlling mode.
@@ -711,16 +713,17 @@ (define_insn "sub<mode>3"
   [(set_attr "alu_type" "sub")
    (set_attr "mode" "<MODE>")])
 
+
 (define_insn "*subsi3_extended"
-  [(set (match_operand:DI 0 "register_operand" "= r")
+  [(set (match_operand:DI 0 "register_operand" "=r")
 	(sign_extend:DI
-	    (minus:SI (match_operand:SI 1 "reg_or_0_operand" " rJ")
-		      (match_operand:SI 2 "register_operand" "  r"))))]
+	    (minus:SI (match_operand:SI 1 "reg_or_0_operand" "rJ")
+		      (match_operand:SI 2 "register_operand" "r"))))]
   "TARGET_64BIT"
   "sub.w\t%0,%z1,%2"
   [(set_attr "type" "arith")
    (set_attr "mode" "SI")])
-\f
+
 ;;
 ;;  ....................
 ;;
@@ -3634,6 +3637,9 @@ (define_insn "loongarch_crcc_w_<size>_w"
 ; The LoongArch SX Instructions.
 (include "lsx.md")
 
+; The LoongArch ASX Instructions.
+(include "lasx.md")
+
 (define_c_enum "unspec" [
   UNSPEC_ADDRESS_FIRST
 ])
-- 
2.36.0


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 6/8] LoongArch: Added Loongson ASX directive builtin function support.
  2023-07-18 11:06 [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target Chenghui Pan
                   ` (4 preceding siblings ...)
  2023-07-18 11:06 ` [PATCH v2 5/8] LoongArch: Added Loongson ASX base instruction support Chenghui Pan
@ 2023-07-18 11:06 ` Chenghui Pan
  2023-07-18 11:06 ` [PATCH v2 7/8] LoongArch: Add Loongson SX directive test cases Chenghui Pan
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Chenghui Pan @ 2023-07-18 11:06 UTC (permalink / raw)
  To: gcc-patches; +Cc: xry111, i, chenglulu, xuchenghua

From: Lulu Cheng <chenglulu@loongson.cn>

gcc/ChangeLog:

	* config.gcc: Export the header file lasxintrin.h.
	* config/loongarch/loongarch-builtins.cc (enum loongarch_builtin_type):
	Add Loongson ASX builtin functions support.
	(AVAIL_ALL): Ditto.
	(LASX_BUILTIN): Ditto.
	(LASX_NO_TARGET_BUILTIN): Ditto.
	(LASX_BUILTIN_TEST_BRANCH): Ditto.
	(CODE_FOR_lasx_xvsadd_b): Ditto.
	(CODE_FOR_lasx_xvsadd_h): Ditto.
	(CODE_FOR_lasx_xvsadd_w): Ditto.
	(CODE_FOR_lasx_xvsadd_d): Ditto.
	(CODE_FOR_lasx_xvsadd_bu): Ditto.
	(CODE_FOR_lasx_xvsadd_hu): Ditto.
	(CODE_FOR_lasx_xvsadd_wu): Ditto.
	(CODE_FOR_lasx_xvsadd_du): Ditto.
	(CODE_FOR_lasx_xvadd_b): Ditto.
	(CODE_FOR_lasx_xvadd_h): Ditto.
	(CODE_FOR_lasx_xvadd_w): Ditto.
	(CODE_FOR_lasx_xvadd_d): Ditto.
	(CODE_FOR_lasx_xvaddi_bu): Ditto.
	(CODE_FOR_lasx_xvaddi_hu): Ditto.
	(CODE_FOR_lasx_xvaddi_wu): Ditto.
	(CODE_FOR_lasx_xvaddi_du): Ditto.
	(CODE_FOR_lasx_xvand_v): Ditto.
	(CODE_FOR_lasx_xvandi_b): Ditto.
	(CODE_FOR_lasx_xvbitsel_v): Ditto.
	(CODE_FOR_lasx_xvseqi_b): Ditto.
	(CODE_FOR_lasx_xvseqi_h): Ditto.
	(CODE_FOR_lasx_xvseqi_w): Ditto.
	(CODE_FOR_lasx_xvseqi_d): Ditto.
	(CODE_FOR_lasx_xvslti_b): Ditto.
	(CODE_FOR_lasx_xvslti_h): Ditto.
	(CODE_FOR_lasx_xvslti_w): Ditto.
	(CODE_FOR_lasx_xvslti_d): Ditto.
	(CODE_FOR_lasx_xvslti_bu): Ditto.
	(CODE_FOR_lasx_xvslti_hu): Ditto.
	(CODE_FOR_lasx_xvslti_wu): Ditto.
	(CODE_FOR_lasx_xvslti_du): Ditto.
	(CODE_FOR_lasx_xvslei_b): Ditto.
	(CODE_FOR_lasx_xvslei_h): Ditto.
	(CODE_FOR_lasx_xvslei_w): Ditto.
	(CODE_FOR_lasx_xvslei_d): Ditto.
	(CODE_FOR_lasx_xvslei_bu): Ditto.
	(CODE_FOR_lasx_xvslei_hu): Ditto.
	(CODE_FOR_lasx_xvslei_wu): Ditto.
	(CODE_FOR_lasx_xvslei_du): Ditto.
	(CODE_FOR_lasx_xvdiv_b): Ditto.
	(CODE_FOR_lasx_xvdiv_h): Ditto.
	(CODE_FOR_lasx_xvdiv_w): Ditto.
	(CODE_FOR_lasx_xvdiv_d): Ditto.
	(CODE_FOR_lasx_xvdiv_bu): Ditto.
	(CODE_FOR_lasx_xvdiv_hu): Ditto.
	(CODE_FOR_lasx_xvdiv_wu): Ditto.
	(CODE_FOR_lasx_xvdiv_du): Ditto.
	(CODE_FOR_lasx_xvfadd_s): Ditto.
	(CODE_FOR_lasx_xvfadd_d): Ditto.
	(CODE_FOR_lasx_xvftintrz_w_s): Ditto.
	(CODE_FOR_lasx_xvftintrz_l_d): Ditto.
	(CODE_FOR_lasx_xvftintrz_wu_s): Ditto.
	(CODE_FOR_lasx_xvftintrz_lu_d): Ditto.
	(CODE_FOR_lasx_xvffint_s_w): Ditto.
	(CODE_FOR_lasx_xvffint_d_l): Ditto.
	(CODE_FOR_lasx_xvffint_s_wu): Ditto.
	(CODE_FOR_lasx_xvffint_d_lu): Ditto.
	(CODE_FOR_lasx_xvfsub_s): Ditto.
	(CODE_FOR_lasx_xvfsub_d): Ditto.
	(CODE_FOR_lasx_xvfmul_s): Ditto.
	(CODE_FOR_lasx_xvfmul_d): Ditto.
	(CODE_FOR_lasx_xvfdiv_s): Ditto.
	(CODE_FOR_lasx_xvfdiv_d): Ditto.
	(CODE_FOR_lasx_xvfmax_s): Ditto.
	(CODE_FOR_lasx_xvfmax_d): Ditto.
	(CODE_FOR_lasx_xvfmin_s): Ditto.
	(CODE_FOR_lasx_xvfmin_d): Ditto.
	(CODE_FOR_lasx_xvfsqrt_s): Ditto.
	(CODE_FOR_lasx_xvfsqrt_d): Ditto.
	(CODE_FOR_lasx_xvflogb_s): Ditto.
	(CODE_FOR_lasx_xvflogb_d): Ditto.
	(CODE_FOR_lasx_xvmax_b): Ditto.
	(CODE_FOR_lasx_xvmax_h): Ditto.
	(CODE_FOR_lasx_xvmax_w): Ditto.
	(CODE_FOR_lasx_xvmax_d): Ditto.
	(CODE_FOR_lasx_xvmaxi_b): Ditto.
	(CODE_FOR_lasx_xvmaxi_h): Ditto.
	(CODE_FOR_lasx_xvmaxi_w): Ditto.
	(CODE_FOR_lasx_xvmaxi_d): Ditto.
	(CODE_FOR_lasx_xvmax_bu): Ditto.
	(CODE_FOR_lasx_xvmax_hu): Ditto.
	(CODE_FOR_lasx_xvmax_wu): Ditto.
	(CODE_FOR_lasx_xvmax_du): Ditto.
	(CODE_FOR_lasx_xvmaxi_bu): Ditto.
	(CODE_FOR_lasx_xvmaxi_hu): Ditto.
	(CODE_FOR_lasx_xvmaxi_wu): Ditto.
	(CODE_FOR_lasx_xvmaxi_du): Ditto.
	(CODE_FOR_lasx_xvmin_b): Ditto.
	(CODE_FOR_lasx_xvmin_h): Ditto.
	(CODE_FOR_lasx_xvmin_w): Ditto.
	(CODE_FOR_lasx_xvmin_d): Ditto.
	(CODE_FOR_lasx_xvmini_b): Ditto.
	(CODE_FOR_lasx_xvmini_h): Ditto.
	(CODE_FOR_lasx_xvmini_w): Ditto.
	(CODE_FOR_lasx_xvmini_d): Ditto.
	(CODE_FOR_lasx_xvmin_bu): Ditto.
	(CODE_FOR_lasx_xvmin_hu): Ditto.
	(CODE_FOR_lasx_xvmin_wu): Ditto.
	(CODE_FOR_lasx_xvmin_du): Ditto.
	(CODE_FOR_lasx_xvmini_bu): Ditto.
	(CODE_FOR_lasx_xvmini_hu): Ditto.
	(CODE_FOR_lasx_xvmini_wu): Ditto.
	(CODE_FOR_lasx_xvmini_du): Ditto.
	(CODE_FOR_lasx_xvmod_b): Ditto.
	(CODE_FOR_lasx_xvmod_h): Ditto.
	(CODE_FOR_lasx_xvmod_w): Ditto.
	(CODE_FOR_lasx_xvmod_d): Ditto.
	(CODE_FOR_lasx_xvmod_bu): Ditto.
	(CODE_FOR_lasx_xvmod_hu): Ditto.
	(CODE_FOR_lasx_xvmod_wu): Ditto.
	(CODE_FOR_lasx_xvmod_du): Ditto.
	(CODE_FOR_lasx_xvmul_b): Ditto.
	(CODE_FOR_lasx_xvmul_h): Ditto.
	(CODE_FOR_lasx_xvmul_w): Ditto.
	(CODE_FOR_lasx_xvmul_d): Ditto.
	(CODE_FOR_lasx_xvclz_b): Ditto.
	(CODE_FOR_lasx_xvclz_h): Ditto.
	(CODE_FOR_lasx_xvclz_w): Ditto.
	(CODE_FOR_lasx_xvclz_d): Ditto.
	(CODE_FOR_lasx_xvnor_v): Ditto.
	(CODE_FOR_lasx_xvor_v): Ditto.
	(CODE_FOR_lasx_xvori_b): Ditto.
	(CODE_FOR_lasx_xvnori_b): Ditto.
	(CODE_FOR_lasx_xvpcnt_b): Ditto.
	(CODE_FOR_lasx_xvpcnt_h): Ditto.
	(CODE_FOR_lasx_xvpcnt_w): Ditto.
	(CODE_FOR_lasx_xvpcnt_d): Ditto.
	(CODE_FOR_lasx_xvxor_v): Ditto.
	(CODE_FOR_lasx_xvxori_b): Ditto.
	(CODE_FOR_lasx_xvsll_b): Ditto.
	(CODE_FOR_lasx_xvsll_h): Ditto.
	(CODE_FOR_lasx_xvsll_w): Ditto.
	(CODE_FOR_lasx_xvsll_d): Ditto.
	(CODE_FOR_lasx_xvslli_b): Ditto.
	(CODE_FOR_lasx_xvslli_h): Ditto.
	(CODE_FOR_lasx_xvslli_w): Ditto.
	(CODE_FOR_lasx_xvslli_d): Ditto.
	(CODE_FOR_lasx_xvsra_b): Ditto.
	(CODE_FOR_lasx_xvsra_h): Ditto.
	(CODE_FOR_lasx_xvsra_w): Ditto.
	(CODE_FOR_lasx_xvsra_d): Ditto.
	(CODE_FOR_lasx_xvsrai_b): Ditto.
	(CODE_FOR_lasx_xvsrai_h): Ditto.
	(CODE_FOR_lasx_xvsrai_w): Ditto.
	(CODE_FOR_lasx_xvsrai_d): Ditto.
	(CODE_FOR_lasx_xvsrl_b): Ditto.
	(CODE_FOR_lasx_xvsrl_h): Ditto.
	(CODE_FOR_lasx_xvsrl_w): Ditto.
	(CODE_FOR_lasx_xvsrl_d): Ditto.
	(CODE_FOR_lasx_xvsrli_b): Ditto.
	(CODE_FOR_lasx_xvsrli_h): Ditto.
	(CODE_FOR_lasx_xvsrli_w): Ditto.
	(CODE_FOR_lasx_xvsrli_d): Ditto.
	(CODE_FOR_lasx_xvsub_b): Ditto.
	(CODE_FOR_lasx_xvsub_h): Ditto.
	(CODE_FOR_lasx_xvsub_w): Ditto.
	(CODE_FOR_lasx_xvsub_d): Ditto.
	(CODE_FOR_lasx_xvsubi_bu): Ditto.
	(CODE_FOR_lasx_xvsubi_hu): Ditto.
	(CODE_FOR_lasx_xvsubi_wu): Ditto.
	(CODE_FOR_lasx_xvsubi_du): Ditto.
	(CODE_FOR_lasx_xvpackod_d): Ditto.
	(CODE_FOR_lasx_xvpackev_d): Ditto.
	(CODE_FOR_lasx_xvpickod_d): Ditto.
	(CODE_FOR_lasx_xvpickev_d): Ditto.
	(CODE_FOR_lasx_xvrepli_b): Ditto.
	(CODE_FOR_lasx_xvrepli_h): Ditto.
	(CODE_FOR_lasx_xvrepli_w): Ditto.
	(CODE_FOR_lasx_xvrepli_d): Ditto.
	(CODE_FOR_lasx_xvandn_v): Ditto.
	(CODE_FOR_lasx_xvorn_v): Ditto.
	(CODE_FOR_lasx_xvneg_b): Ditto.
	(CODE_FOR_lasx_xvneg_h): Ditto.
	(CODE_FOR_lasx_xvneg_w): Ditto.
	(CODE_FOR_lasx_xvneg_d): Ditto.
	(CODE_FOR_lasx_xvbsrl_v): Ditto.
	(CODE_FOR_lasx_xvbsll_v): Ditto.
	(CODE_FOR_lasx_xvfmadd_s): Ditto.
	(CODE_FOR_lasx_xvfmadd_d): Ditto.
	(CODE_FOR_lasx_xvfmsub_s): Ditto.
	(CODE_FOR_lasx_xvfmsub_d): Ditto.
	(CODE_FOR_lasx_xvfnmadd_s): Ditto.
	(CODE_FOR_lasx_xvfnmadd_d): Ditto.
	(CODE_FOR_lasx_xvfnmsub_s): Ditto.
	(CODE_FOR_lasx_xvfnmsub_d): Ditto.
	(CODE_FOR_lasx_xvpermi_q): Ditto.
	(CODE_FOR_lasx_xvpermi_d): Ditto.
	(CODE_FOR_lasx_xbnz_v): Ditto.
	(CODE_FOR_lasx_xbz_v): Ditto.
	(CODE_FOR_lasx_xvssub_b): Ditto.
	(CODE_FOR_lasx_xvssub_h): Ditto.
	(CODE_FOR_lasx_xvssub_w): Ditto.
	(CODE_FOR_lasx_xvssub_d): Ditto.
	(CODE_FOR_lasx_xvssub_bu): Ditto.
	(CODE_FOR_lasx_xvssub_hu): Ditto.
	(CODE_FOR_lasx_xvssub_wu): Ditto.
	(CODE_FOR_lasx_xvssub_du): Ditto.
	(CODE_FOR_lasx_xvabsd_b): Ditto.
	(CODE_FOR_lasx_xvabsd_h): Ditto.
	(CODE_FOR_lasx_xvabsd_w): Ditto.
	(CODE_FOR_lasx_xvabsd_d): Ditto.
	(CODE_FOR_lasx_xvabsd_bu): Ditto.
	(CODE_FOR_lasx_xvabsd_hu): Ditto.
	(CODE_FOR_lasx_xvabsd_wu): Ditto.
	(CODE_FOR_lasx_xvabsd_du): Ditto.
	(CODE_FOR_lasx_xvavg_b): Ditto.
	(CODE_FOR_lasx_xvavg_h): Ditto.
	(CODE_FOR_lasx_xvavg_w): Ditto.
	(CODE_FOR_lasx_xvavg_d): Ditto.
	(CODE_FOR_lasx_xvavg_bu): Ditto.
	(CODE_FOR_lasx_xvavg_hu): Ditto.
	(CODE_FOR_lasx_xvavg_wu): Ditto.
	(CODE_FOR_lasx_xvavg_du): Ditto.
	(CODE_FOR_lasx_xvavgr_b): Ditto.
	(CODE_FOR_lasx_xvavgr_h): Ditto.
	(CODE_FOR_lasx_xvavgr_w): Ditto.
	(CODE_FOR_lasx_xvavgr_d): Ditto.
	(CODE_FOR_lasx_xvavgr_bu): Ditto.
	(CODE_FOR_lasx_xvavgr_hu): Ditto.
	(CODE_FOR_lasx_xvavgr_wu): Ditto.
	(CODE_FOR_lasx_xvavgr_du): Ditto.
	(CODE_FOR_lasx_xvmuh_b): Ditto.
	(CODE_FOR_lasx_xvmuh_h): Ditto.
	(CODE_FOR_lasx_xvmuh_w): Ditto.
	(CODE_FOR_lasx_xvmuh_d): Ditto.
	(CODE_FOR_lasx_xvmuh_bu): Ditto.
	(CODE_FOR_lasx_xvmuh_hu): Ditto.
	(CODE_FOR_lasx_xvmuh_wu): Ditto.
	(CODE_FOR_lasx_xvmuh_du): Ditto.
	(CODE_FOR_lasx_xvssran_b_h): Ditto.
	(CODE_FOR_lasx_xvssran_h_w): Ditto.
	(CODE_FOR_lasx_xvssran_w_d): Ditto.
	(CODE_FOR_lasx_xvssran_bu_h): Ditto.
	(CODE_FOR_lasx_xvssran_hu_w): Ditto.
	(CODE_FOR_lasx_xvssran_wu_d): Ditto.
	(CODE_FOR_lasx_xvssrarn_b_h): Ditto.
	(CODE_FOR_lasx_xvssrarn_h_w): Ditto.
	(CODE_FOR_lasx_xvssrarn_w_d): Ditto.
	(CODE_FOR_lasx_xvssrarn_bu_h): Ditto.
	(CODE_FOR_lasx_xvssrarn_hu_w): Ditto.
	(CODE_FOR_lasx_xvssrarn_wu_d): Ditto.
	(CODE_FOR_lasx_xvssrln_bu_h): Ditto.
	(CODE_FOR_lasx_xvssrln_hu_w): Ditto.
	(CODE_FOR_lasx_xvssrln_wu_d): Ditto.
	(CODE_FOR_lasx_xvssrlrn_bu_h): Ditto.
	(CODE_FOR_lasx_xvssrlrn_hu_w): Ditto.
	(CODE_FOR_lasx_xvssrlrn_wu_d): Ditto.
	(CODE_FOR_lasx_xvftint_w_s): Ditto.
	(CODE_FOR_lasx_xvftint_l_d): Ditto.
	(CODE_FOR_lasx_xvftint_wu_s): Ditto.
	(CODE_FOR_lasx_xvftint_lu_d): Ditto.
	(CODE_FOR_lasx_xvsllwil_h_b): Ditto.
	(CODE_FOR_lasx_xvsllwil_w_h): Ditto.
	(CODE_FOR_lasx_xvsllwil_d_w): Ditto.
	(CODE_FOR_lasx_xvsllwil_hu_bu): Ditto.
	(CODE_FOR_lasx_xvsllwil_wu_hu): Ditto.
	(CODE_FOR_lasx_xvsllwil_du_wu): Ditto.
	(CODE_FOR_lasx_xvsat_b): Ditto.
	(CODE_FOR_lasx_xvsat_h): Ditto.
	(CODE_FOR_lasx_xvsat_w): Ditto.
	(CODE_FOR_lasx_xvsat_d): Ditto.
	(CODE_FOR_lasx_xvsat_bu): Ditto.
	(CODE_FOR_lasx_xvsat_hu): Ditto.
	(CODE_FOR_lasx_xvsat_wu): Ditto.
	(CODE_FOR_lasx_xvsat_du): Ditto.
	(loongarch_builtin_vectorized_function): Ditto.
	(loongarch_expand_builtin_insn): Ditto.
	(loongarch_expand_builtin): Ditto.
	* config/loongarch/loongarch-ftypes.def (1): Ditto.
	(2): Ditto.
	(3): Ditto.
	(4): Ditto.
	* config/loongarch/lasxintrin.h: New file.
---
 gcc/config.gcc                             |    2 +-
 gcc/config/loongarch/lasxintrin.h          | 5342 ++++++++++++++++++++
 gcc/config/loongarch/loongarch-builtins.cc | 1180 ++++-
 gcc/config/loongarch/loongarch-ftypes.def  |  271 +-
 4 files changed, 6792 insertions(+), 3 deletions(-)
 create mode 100644 gcc/config/loongarch/lasxintrin.h

diff --git a/gcc/config.gcc b/gcc/config.gcc
index 3aa1d9dd4e6..865763cb157 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -468,7 +468,7 @@ mips*-*-*)
 	;;
 loongarch*-*-*)
 	cpu_type=loongarch
-	extra_headers="larchintrin.h lsxintrin.h"
+	extra_headers="larchintrin.h lsxintrin.h lasxintrin.h"
 	extra_objs="loongarch-c.o loongarch-builtins.o loongarch-cpu.o loongarch-opts.o loongarch-def.o"
 	extra_gcc_objs="loongarch-driver.o loongarch-cpu.o loongarch-opts.o loongarch-def.o"
 	extra_options="${extra_options} g.opt fused-madd.opt"
diff --git a/gcc/config/loongarch/lasxintrin.h b/gcc/config/loongarch/lasxintrin.h
new file mode 100644
index 00000000000..1cb63828738
--- /dev/null
+++ b/gcc/config/loongarch/lasxintrin.h
@@ -0,0 +1,5342 @@
+/* LARCH Loongson ASX intrinsics include file.
+
+   Copyright (C) 2018 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef _GCC_LOONGSON_ASXINTRIN_H
+#define _GCC_LOONGSON_ASXINTRIN_H 1
+
+#if defined(__loongarch_asx)
+
+typedef signed char v32i8 __attribute__ ((vector_size(32), aligned(32)));
+typedef signed char v32i8_b __attribute__ ((vector_size(32), aligned(1)));
+typedef unsigned char v32u8 __attribute__ ((vector_size(32), aligned(32)));
+typedef unsigned char v32u8_b __attribute__ ((vector_size(32), aligned(1)));
+typedef short v16i16 __attribute__ ((vector_size(32), aligned(32)));
+typedef short v16i16_h __attribute__ ((vector_size(32), aligned(2)));
+typedef unsigned short v16u16 __attribute__ ((vector_size(32), aligned(32)));
+typedef unsigned short v16u16_h __attribute__ ((vector_size(32), aligned(2)));
+typedef int v8i32 __attribute__ ((vector_size(32), aligned(32)));
+typedef int v8i32_w __attribute__ ((vector_size(32), aligned(4)));
+typedef unsigned int v8u32 __attribute__ ((vector_size(32), aligned(32)));
+typedef unsigned int v8u32_w __attribute__ ((vector_size(32), aligned(4)));
+typedef long long v4i64 __attribute__ ((vector_size(32), aligned(32)));
+typedef long long v4i64_d __attribute__ ((vector_size(32), aligned(8)));
+typedef unsigned long long v4u64 __attribute__ ((vector_size(32), aligned(32)));
+typedef unsigned long long v4u64_d __attribute__ ((vector_size(32), aligned(8)));
+typedef float v8f32 __attribute__ ((vector_size(32), aligned(32)));
+typedef float v8f32_w __attribute__ ((vector_size(32), aligned(4)));
+typedef double v4f64 __attribute__ ((vector_size(32), aligned(32)));
+typedef double v4f64_d __attribute__ ((vector_size(32), aligned(8)));
+
+typedef float __m256 __attribute__ ((__vector_size__ (32),
+				     __may_alias__));
+typedef long long __m256i __attribute__ ((__vector_size__ (32),
+					  __may_alias__));
+typedef double __m256d __attribute__ ((__vector_size__ (32),
+				       __may_alias__));
+
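+
+/* A minimal usage sketch; the function and parameter names here are purely
+   illustrative and not part of this header:
+
+     #include <lasxintrin.h>
+
+     __m256i
+     shift_bytes_left (__m256i v, __m256i amounts)
+     {
+       return __lasx_xvsll_b (v, amounts);
+     }
+
+   Build with LASX enabled so that __loongarch_asx is defined (e.g. via the
+   -mlasx option added in this patch series).  */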
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsll_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsll_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsll_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsll_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsll_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsll_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsll_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsll_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  V32QI, V32QI, UQI.  */
+#define __lasx_xvslli_b(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvslli_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V16HI, V16HI, UQI.  */
+#define __lasx_xvslli_h(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvslli_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V8SI, V8SI, UQI.  */
+#define __lasx_xvslli_w(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslli_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V4DI, V4DI, UQI.  */
+#define __lasx_xvslli_d(/*__m256i*/ _1, /*ui6*/ _2) \
+  ((__m256i)__builtin_lasx_xvslli_d ((v4i64)(_1), (_2)))
+
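+/* Note: the __lasx_xvslli_* forms above (and the other immediate-operand
+   intrinsics in this header) are macros rather than inline functions, so
+   that the shift amount reaches the builtin as an integer constant
+   expression, as required by the ui3/ui4/ui5/ui6 instruction fields.  */
+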
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsra_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsra_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsra_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsra_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsra_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsra_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsra_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsra_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  V32QI, V32QI, UQI.  */
+#define __lasx_xvsrai_b(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrai_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V16HI, V16HI, UQI.  */
+#define __lasx_xvsrai_h(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrai_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V8SI, V8SI, UQI.  */
+#define __lasx_xvsrai_w(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrai_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V4DI, V4DI, UQI.  */
+#define __lasx_xvsrai_d(/*__m256i*/ _1, /*ui6*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrai_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrar_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrar_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrar_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrar_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrar_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrar_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrar_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrar_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  V32QI, V32QI, UQI.  */
+#define __lasx_xvsrari_b(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrari_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V16HI, V16HI, UQI.  */
+#define __lasx_xvsrari_h(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrari_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V8SI, V8SI, UQI.  */
+#define __lasx_xvsrari_w(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrari_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V4DI, V4DI, UQI.  */
+#define __lasx_xvsrari_d(/*__m256i*/ _1, /*ui6*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrari_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrl_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrl_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrl_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrl_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrl_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrl_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrl_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrl_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  V32QI, V32QI, UQI.  */
+#define __lasx_xvsrli_b(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrli_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V16HI, V16HI, UQI.  */
+#define __lasx_xvsrli_h(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrli_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V8SI, V8SI, UQI.  */
+#define __lasx_xvsrli_w(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrli_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V4DI, V4DI, UQI.  */
+#define __lasx_xvsrli_d(/*__m256i*/ _1, /*ui6*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrli_d ((v4i64)(_1), (_2)))
+
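+/* Usage sketch (illustrative only, assuming __m256i values v and s):
+   the xvsra and xvsrai forms shift right arithmetically (sign-propagating),
+   xvsrl and xvsrli shift right logically (zero-filling), and xvsrar and
+   xvsrlr are the rounding variants.  For example:
+     __m256i a = __lasx_xvsrai_w (v, 4);
+     __m256i l = __lasx_xvsrl_w (v, s);  */
+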
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrlr_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrlr_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrlr_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrlr_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrlr_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrlr_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrlr_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrlr_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  V32QI, V32QI, UQI.  */
+#define __lasx_xvsrlri_b(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrlri_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V16HI, V16HI, UQI.  */
+#define __lasx_xvsrlri_h(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrlri_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V8SI, V8SI, UQI.  */
+#define __lasx_xvsrlri_w(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrlri_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V4DI, V4DI, UQI.  */
+#define __lasx_xvsrlri_d(/*__m256i*/ _1, /*ui6*/ _2) \
+  ((__m256i)__builtin_lasx_xvsrlri_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitclr_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvbitclr_b ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitclr_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvbitclr_h ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitclr_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvbitclr_w ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitclr_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvbitclr_d ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UQI.  */
+#define __lasx_xvbitclri_b(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvbitclri_b ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UQI.  */
+#define __lasx_xvbitclri_h(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvbitclri_h ((v16u16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UQI.  */
+#define __lasx_xvbitclri_w(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvbitclri_w ((v8u32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UQI.  */
+#define __lasx_xvbitclri_d(/*__m256i*/ _1, /*ui6*/ _2) \
+  ((__m256i)__builtin_lasx_xvbitclri_d ((v4u64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitset_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvbitset_b ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitset_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvbitset_h ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitset_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvbitset_w ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitset_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvbitset_d ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UQI.  */
+#define __lasx_xvbitseti_b(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvbitseti_b ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UQI.  */
+#define __lasx_xvbitseti_h(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvbitseti_h ((v16u16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UQI.  */
+#define __lasx_xvbitseti_w(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvbitseti_w ((v8u32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UQI.  */
+#define __lasx_xvbitseti_d(/*__m256i*/ _1, /*ui6*/ _2) \
+  ((__m256i)__builtin_lasx_xvbitseti_d ((v4u64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitrev_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvbitrev_b ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitrev_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvbitrev_h ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitrev_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvbitrev_w ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitrev_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvbitrev_d ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UQI.  */
+#define __lasx_xvbitrevi_b(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvbitrevi_b ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UQI.  */
+#define __lasx_xvbitrevi_h(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvbitrevi_h ((v16u16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UQI.  */
+#define __lasx_xvbitrevi_w(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvbitrevi_w ((v8u32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UQI.  */
+#define __lasx_xvbitrevi_d(/*__m256i*/ _1, /*ui6*/ _2) \
+  ((__m256i)__builtin_lasx_xvbitrevi_d ((v4u64)(_1), (_2)))
+
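+/* Usage sketch (illustrative only, assuming a __m256i value v):
+   xvbitclr clears, xvbitset sets and xvbitrev toggles one bit per element,
+   selected by the second operand (or by the immediate in the -i forms), e.g.
+     __m256i r = __lasx_xvbitseti_b (v, 7);
+   sets bit 7 of every byte of v.  */
+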
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvadd_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvadd_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvadd_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvadd_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvadd_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvadd_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvadd_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvadd_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V32QI, V32QI, UQI.  */
+#define __lasx_xvaddi_bu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvaddi_bu ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, V16HI, UQI.  */
+#define __lasx_xvaddi_hu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvaddi_hu ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V8SI, V8SI, UQI.  */
+#define __lasx_xvaddi_wu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvaddi_wu ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V4DI, V4DI, UQI.  */
+#define __lasx_xvaddi_du(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvaddi_du ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsub_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsub_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsub_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsub_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsub_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsub_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsub_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsub_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V32QI, V32QI, UQI.  */
+#define __lasx_xvsubi_bu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvsubi_bu ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, V16HI, UQI.  */
+#define __lasx_xvsubi_hu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvsubi_hu ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V8SI, V8SI, UQI.  */
+#define __lasx_xvsubi_wu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvsubi_wu ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V4DI, V4DI, UQI.  */
+#define __lasx_xvsubi_du(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvsubi_du ((v4i64)(_1), (_2)))
+
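+/* Usage sketch (illustrative only, assuming a __m256i value v): the xvaddi
+   and xvsubi forms take a 5-bit unsigned immediate, so
+     __m256i r = __lasx_xvaddi_wu (v, 1);
+   adds 1 to every 32-bit element without materializing a constant vector.  */
+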
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmax_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmax_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmax_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmax_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmax_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmax_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmax_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmax_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V32QI, V32QI, QI.  */
+#define __lasx_xvmaxi_b(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmaxi_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V16HI, V16HI, QI.  */
+#define __lasx_xvmaxi_h(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmaxi_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V8SI, V8SI, QI.  */
+#define __lasx_xvmaxi_w(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmaxi_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V4DI, V4DI, QI.  */
+#define __lasx_xvmaxi_d(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmaxi_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmax_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmax_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmax_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmax_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmax_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmax_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmax_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmax_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UQI.  */
+#define __lasx_xvmaxi_bu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmaxi_bu ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UQI.  */
+#define __lasx_xvmaxi_hu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmaxi_hu ((v16u16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UQI.  */
+#define __lasx_xvmaxi_wu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmaxi_wu ((v8u32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UQI.  */
+#define __lasx_xvmaxi_du(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmaxi_du ((v4u64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmin_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmin_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmin_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmin_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmin_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmin_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmin_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmin_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V32QI, V32QI, QI.  */
+#define __lasx_xvmini_b(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmini_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V16HI, V16HI, QI.  */
+#define __lasx_xvmini_h(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmini_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V8SI, V8SI, QI.  */
+#define __lasx_xvmini_w(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmini_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V4DI, V4DI, QI.  */
+#define __lasx_xvmini_d(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmini_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmin_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmin_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmin_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmin_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmin_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmin_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmin_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmin_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UQI.  */
+#define __lasx_xvmini_bu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmini_bu ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UQI.  */
+#define __lasx_xvmini_hu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmini_hu ((v16u16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UQI.  */
+#define __lasx_xvmini_wu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmini_wu ((v8u32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UQI.  */
+#define __lasx_xvmini_du(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvmini_du ((v4u64)(_1), (_2)))
+
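+/* Usage sketch (illustrative only, assuming a __m256i value v): the signed
+   immediate forms take a 5-bit signed constant, so a per-byte clamp to the
+   range [-8, 7] can be written as
+     __m256i r = __lasx_xvmini_b (__lasx_xvmaxi_b (v, -8), 7);  */
+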
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvseq_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvseq_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvseq_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvseq_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvseq_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvseq_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvseq_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvseq_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V32QI, V32QI, QI.  */
+#define __lasx_xvseqi_b(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvseqi_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V16HI, V16HI, QI.  */
+#define __lasx_xvseqi_h(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvseqi_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V8SI, V8SI, QI.  */
+#define __lasx_xvseqi_w(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvseqi_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V4DI, V4DI, QI.  */
+#define __lasx_xvseqi_d(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvseqi_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvslt_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvslt_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvslt_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvslt_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvslt_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvslt_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvslt_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvslt_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V32QI, V32QI, QI.  */
+#define __lasx_xvslti_b(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslti_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V16HI, V16HI, QI.  */
+#define __lasx_xvslti_h(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslti_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V8SI, V8SI, QI.  */
+#define __lasx_xvslti_w(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslti_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V4DI, V4DI, QI.  */
+#define __lasx_xvslti_d(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslti_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvslt_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvslt_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvslt_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvslt_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvslt_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvslt_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvslt_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvslt_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V32QI, UV32QI, UQI.  */
+#define __lasx_xvslti_bu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslti_bu ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, UV16HI, UQI.  */
+#define __lasx_xvslti_hu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslti_hu ((v16u16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V8SI, UV8SI, UQI.  */
+#define __lasx_xvslti_wu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslti_wu ((v8u32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V4DI, UV4DI, UQI.  */
+#define __lasx_xvslti_du(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslti_du ((v4u64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsle_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsle_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsle_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsle_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsle_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsle_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsle_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsle_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V32QI, V32QI, QI.  */
+#define __lasx_xvslei_b(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslei_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V16HI, V16HI, QI.  */
+#define __lasx_xvslei_h(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslei_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V8SI, V8SI, QI.  */
+#define __lasx_xvslei_w(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslei_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, si5.  */
+/* Data types in instruction templates:  V4DI, V4DI, QI.  */
+#define __lasx_xvslei_d(/*__m256i*/ _1, /*si5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslei_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsle_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsle_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsle_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsle_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsle_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsle_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsle_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsle_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V32QI, UV32QI, UQI.  */
+#define __lasx_xvslei_bu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslei_bu ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, UV16HI, UQI.  */
+#define __lasx_xvslei_hu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslei_hu ((v16u16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V8SI, UV8SI, UQI.  */
+#define __lasx_xvslei_wu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslei_wu ((v8u32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V4DI, UV4DI, UQI.  */
+#define __lasx_xvslei_du(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvslei_du ((v4u64)(_1), (_2)))
+
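+/* Usage sketch (illustrative only, assuming __m256i values a and b): the
+   compare instructions produce per-element masks (all ones where the
+   condition holds, zero otherwise), e.g.
+     __m256i lt = __lasx_xvslt_w (a, b);
+   which can then feed bitwise selection or masking code.  */
+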
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  V32QI, V32QI, UQI.  */
+#define __lasx_xvsat_b(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvsat_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V16HI, V16HI, UQI.  */
+#define __lasx_xvsat_h(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvsat_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V8SI, V8SI, UQI.  */
+#define __lasx_xvsat_w(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvsat_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V4DI, V4DI, UQI.  */
+#define __lasx_xvsat_d(/*__m256i*/ _1, /*ui6*/ _2) \
+  ((__m256i)__builtin_lasx_xvsat_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UQI.  */
+#define __lasx_xvsat_bu(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvsat_bu ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UQI.  */
+#define __lasx_xvsat_hu(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvsat_hu ((v16u16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UQI.  */
+#define __lasx_xvsat_wu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvsat_wu ((v8u32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UQI.  */
+#define __lasx_xvsat_du(/*__m256i*/ _1, /*ui6*/ _2) \
+  ((__m256i)__builtin_lasx_xvsat_du ((v4u64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvadda_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvadda_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvadda_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvadda_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvadda_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvadda_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvadda_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvadda_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsadd_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsadd_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsadd_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsadd_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsadd_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsadd_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsadd_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsadd_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsadd_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsadd_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsadd_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsadd_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsadd_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsadd_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsadd_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsadd_du ((v4u64)_1, (v4u64)_2);
+}
+
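+/* Usage sketch (illustrative only, assuming __m256i values a and b):
+     __m256i r = __lasx_xvsadd_bu (a, b);
+   adds unsigned bytes with saturation, clamping results at 255 instead of
+   wrapping as xvadd_b would.  */
+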
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavg_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavg_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavg_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavg_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavg_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavg_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavg_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavg_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavg_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavg_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavg_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavg_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavg_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavg_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavg_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavg_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavgr_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavgr_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavgr_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavgr_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavgr_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavgr_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavgr_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavgr_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavgr_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavgr_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavgr_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavgr_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavgr_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavgr_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvavgr_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvavgr_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssub_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssub_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssub_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssub_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssub_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssub_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssub_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssub_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssub_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssub_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssub_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssub_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssub_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssub_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssub_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssub_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvabsd_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvabsd_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvabsd_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvabsd_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvabsd_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvabsd_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvabsd_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvabsd_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvabsd_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvabsd_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvabsd_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvabsd_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvabsd_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvabsd_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvabsd_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvabsd_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmul_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmul_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmul_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmul_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmul_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmul_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmul_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmul_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmadd_b (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmadd_b ((v32i8)_1, (v32i8)_2, (v32i8)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmadd_h (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmadd_h ((v16i16)_1, (v16i16)_2, (v16i16)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmadd_w (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmadd_w ((v8i32)_1, (v8i32)_2, (v8i32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmadd_d (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmadd_d ((v4i64)_1, (v4i64)_2, (v4i64)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmsub_b (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmsub_b ((v32i8)_1, (v32i8)_2, (v32i8)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmsub_h (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmsub_h ((v16i16)_1, (v16i16)_2, (v16i16)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmsub_w (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmsub_w ((v8i32)_1, (v8i32)_2, (v8i32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmsub_d (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmsub_d ((v4i64)_1, (v4i64)_2, (v4i64)_3);
+}
+
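+/* Usage sketch (illustrative only, not part of the generated intrinsic
+   list): xvmadd_* computes acc + a * b and xvmsub_* computes acc - a * b
+   element-wise, with the accumulator passed as the first operand.  The
+   helper name below is hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i example_mul_accumulate_w (__m256i acc, __m256i a, __m256i b)
+{
+  /* acc[i] += a[i] * b[i] for each of the eight signed 32-bit lanes.  */
+  return __lasx_xvmadd_w (acc, a, b);
+}
+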
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvdiv_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvdiv_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvdiv_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvdiv_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvdiv_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvdiv_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvdiv_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvdiv_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvdiv_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvdiv_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvdiv_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvdiv_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvdiv_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvdiv_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvdiv_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvdiv_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhaddw_h_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhaddw_h_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhaddw_w_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhaddw_w_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhaddw_d_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhaddw_d_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhaddw_hu_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhaddw_hu_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhaddw_wu_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhaddw_wu_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhaddw_du_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhaddw_du_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhsubw_h_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhsubw_h_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhsubw_w_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhsubw_w_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhsubw_d_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhsubw_d_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhsubw_hu_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhsubw_hu_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhsubw_wu_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhsubw_wu_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhsubw_du_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhsubw_du_wu ((v8u32)_1, (v8u32)_2);
+}
+
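+/* Usage sketch (illustrative only): the horizontal widening forms add the
+   odd-indexed elements of the first operand to the even-indexed elements of
+   the second, so passing the same vector twice yields the widened pairwise
+   sums a[2i] + a[2i+1].  The helper name is hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i example_pairwise_sum_h_b (__m256i a)
+{
+  return __lasx_xvhaddw_h_b (a, a);
+}
+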
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmod_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmod_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmod_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmod_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmod_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmod_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmod_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmod_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmod_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmod_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmod_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmod_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmod_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmod_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmod_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmod_du ((v4u64)_1, (v4u64)_2);
+}
+
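+/* Usage sketch (illustrative only): xvdiv_* and xvmod_* give the per-element
+   quotient and remainder, so q * b + r recovers a in each lane.  The helper
+   name is hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i example_remainder_w (__m256i a, __m256i b)
+{
+  return __lasx_xvmod_w (a, b);
+}
+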
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V32QI, V32QI, UQI.  */
+#define __lasx_xvrepl128vei_b(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvrepl128vei_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  V16HI, V16HI, UQI.  */
+#define __lasx_xvrepl128vei_h(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvrepl128vei_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui2.  */
+/* Data types in instruction templates:  V8SI, V8SI, UQI.  */
+#define __lasx_xvrepl128vei_w(/*__m256i*/ _1, /*ui2*/ _2) \
+  ((__m256i)__builtin_lasx_xvrepl128vei_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui1.  */
+/* Data types in instruction templates:  V4DI, V4DI, UQI.  */
+#define __lasx_xvrepl128vei_d(/*__m256i*/ _1, /*ui1*/ _2) \
+  ((__m256i)__builtin_lasx_xvrepl128vei_d ((v4i64)(_1), (_2)))
+
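+/* Usage sketch (illustrative only): the index of xvrepl128vei_* must be a
+   compile-time constant, which is why these intrinsics are macros; index 0
+   broadcasts the first word of each 128-bit lane across that lane.  The
+   helper name is hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i example_splat_lane_word0 (__m256i v)
+{
+  return __lasx_xvrepl128vei_w (v, 0);
+}
+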
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpickev_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpickev_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpickev_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpickev_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpickev_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpickev_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpickev_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpickev_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpickod_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpickod_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpickod_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpickod_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpickod_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpickod_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpickod_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpickod_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvilvh_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvilvh_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvilvh_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvilvh_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvilvh_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvilvh_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvilvh_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvilvh_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvilvl_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvilvl_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvilvl_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvilvl_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvilvl_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvilvl_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvilvl_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvilvl_d ((v4i64)_1, (v4i64)_2);
+}
+
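+/* Usage sketch (illustrative only): xvilvl_b interleaves the low-half bytes
+   of its two sources within each 128-bit lane (xvilvh_b does the same for
+   the high halves), a typical step when zipping two element streams.  The
+   helper name is hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i example_zip_low_bytes (__m256i a, __m256i b)
+{
+  return __lasx_xvilvl_b (a, b);
+}
+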
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpackev_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpackev_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpackev_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpackev_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpackev_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpackev_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpackev_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpackev_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpackod_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpackod_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpackod_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpackod_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpackod_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpackod_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpackod_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvpackod_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk, xa.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvshuf_b (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvshuf_b ((v32i8)_1, (v32i8)_2, (v32i8)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvshuf_h (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvshuf_h ((v16i16)_1, (v16i16)_2, (v16i16)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvshuf_w (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvshuf_w ((v8i32)_1, (v8i32)_2, (v8i32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvshuf_d (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvshuf_d ((v4i64)_1, (v4i64)_2, (v4i64)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvand_v (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvand_v ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UQI.  */
+#define __lasx_xvandi_b(/*__m256i*/ _1, /*ui8*/ _2) \
+  ((__m256i)__builtin_lasx_xvandi_b ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvor_v (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvor_v ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UQI.  */
+#define __lasx_xvori_b(/*__m256i*/ _1, /*ui8*/ _2) \
+  ((__m256i)__builtin_lasx_xvori_b ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvnor_v (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvnor_v ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UQI.  */
+#define __lasx_xvnori_b(/*__m256i*/ _1, /*ui8*/ _2) \
+  ((__m256i)__builtin_lasx_xvnori_b ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvxor_v (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvxor_v ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UQI.  */
+#define __lasx_xvxori_b(/*__m256i*/ _1, /*ui8*/ _2) \
+  ((__m256i)__builtin_lasx_xvxori_b ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk, xa.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvbitsel_v (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvbitsel_v ((v32u8)_1, (v32u8)_2, (v32u8)_3);
+}
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI, USI.  */
+#define __lasx_xvbitseli_b(/*__m256i*/ _1, /*__m256i*/ _2, /*ui8*/ _3) \
+  ((__m256i)__builtin_lasx_xvbitseli_b ((v32u8)(_1), (v32u8)(_2), (_3)))
+
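+/* Usage sketch (illustrative only): xvbitsel_v is a bitwise select, taking
+   each result bit from one of the first two operands according to the
+   corresponding bit of the mask operand.  The helper name is hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i example_bitwise_select (__m256i a, __m256i b, __m256i mask)
+{
+  return __lasx_xvbitsel_v (a, b, mask);
+}
+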
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  V32QI, V32QI, USI.  */
+#define __lasx_xvshuf4i_b(/*__m256i*/ _1, /*ui8*/ _2) \
+  ((__m256i)__builtin_lasx_xvshuf4i_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  V16HI, V16HI, USI.  */
+#define __lasx_xvshuf4i_h(/*__m256i*/ _1, /*ui8*/ _2) \
+  ((__m256i)__builtin_lasx_xvshuf4i_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  V8SI, V8SI, USI.  */
+#define __lasx_xvshuf4i_w(/*__m256i*/ _1, /*ui8*/ _2) \
+  ((__m256i)__builtin_lasx_xvshuf4i_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, rj.  */
+/* Data types in instruction templates:  V32QI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplgr2vr_b (int _1)
+{
+  return (__m256i)__builtin_lasx_xvreplgr2vr_b ((int)_1);
+}
+
+/* Assembly instruction format:	xd, rj.  */
+/* Data types in instruction templates:  V16HI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplgr2vr_h (int _1)
+{
+  return (__m256i)__builtin_lasx_xvreplgr2vr_h ((int)_1);
+}
+
+/* Assembly instruction format:	xd, rj.  */
+/* Data types in instruction templates:  V8SI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplgr2vr_w (int _1)
+{
+  return (__m256i)__builtin_lasx_xvreplgr2vr_w ((int)_1);
+}
+
+/* Assembly instruction format:	xd, rj.  */
+/* Data types in instruction templates:  V4DI, DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplgr2vr_d (long int _1)
+{
+  return (__m256i)__builtin_lasx_xvreplgr2vr_d ((long int)_1);
+}
+
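+/* Usage sketch (illustrative only): xvreplgr2vr_* builds a vector with every
+   element equal to a scalar taken from a general-purpose register.  The
+   helper name is hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i example_splat_int (int x)
+{
+  return __lasx_xvreplgr2vr_w (x);
+}
+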
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpcnt_b (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvpcnt_b ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpcnt_h (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvpcnt_h ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpcnt_w (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvpcnt_w ((v8i32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvpcnt_d (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvpcnt_d ((v4i64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvclo_b (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvclo_b ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvclo_h (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvclo_h ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvclo_w (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvclo_w ((v8i32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvclo_d (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvclo_d ((v4i64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvclz_b (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvclz_b ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvclz_h (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvclz_h ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvclz_w (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvclz_w ((v8i32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvclz_d (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvclz_d ((v4i64)_1);
+}
+
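+/* Usage sketch (illustrative only): per-element bit counting; xvpcnt_* gives
+   the population count and xvclz_* the number of leading zeros of each
+   element.  The helper name is hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i example_popcount_w (__m256i v)
+{
+  return __lasx_xvpcnt_w (v);
+}
+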
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SF, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfadd_s (__m256 _1, __m256 _2)
+{
+  return (__m256)__builtin_lasx_xvfadd_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfadd_d (__m256d _1, __m256d _2)
+{
+  return (__m256d)__builtin_lasx_xvfadd_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SF, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfsub_s (__m256 _1, __m256 _2)
+{
+  return (__m256)__builtin_lasx_xvfsub_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfsub_d (__m256d _1, __m256d _2)
+{
+  return (__m256d)__builtin_lasx_xvfsub_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SF, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfmul_s (__m256 _1, __m256 _2)
+{
+  return (__m256)__builtin_lasx_xvfmul_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfmul_d (__m256d _1, __m256d _2)
+{
+  return (__m256d)__builtin_lasx_xvfmul_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SF, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfdiv_s (__m256 _1, __m256 _2)
+{
+  return (__m256)__builtin_lasx_xvfdiv_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfdiv_d (__m256d _1, __m256d _2)
+{
+  return (__m256d)__builtin_lasx_xvfdiv_d ((v4f64)_1, (v4f64)_2);
+}
+
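+/* Usage sketch (illustrative only): element-wise single-precision arithmetic
+   over the eight float lanes, here computing (a + b) * c.  The helper name
+   is hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 example_add_then_mul_s (__m256 a, __m256 b, __m256 c)
+{
+  return __lasx_xvfmul_s (__lasx_xvfadd_s (a, b), c);
+}
+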
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcvt_h_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcvt_h_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfcvt_s_d (__m256d _1, __m256d _2)
+{
+  return (__m256)__builtin_lasx_xvfcvt_s_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SF, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfmin_s (__m256 _1, __m256 _2)
+{
+  return (__m256)__builtin_lasx_xvfmin_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfmin_d (__m256d _1, __m256d _2)
+{
+  return (__m256d)__builtin_lasx_xvfmin_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SF, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfmina_s (__m256 _1, __m256 _2)
+{
+  return (__m256)__builtin_lasx_xvfmina_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfmina_d (__m256d _1, __m256d _2)
+{
+  return (__m256d)__builtin_lasx_xvfmina_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SF, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfmax_s (__m256 _1, __m256 _2)
+{
+  return (__m256)__builtin_lasx_xvfmax_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfmax_d (__m256d _1, __m256d _2)
+{
+  return (__m256d)__builtin_lasx_xvfmax_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SF, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfmaxa_s (__m256 _1, __m256 _2)
+{
+  return (__m256)__builtin_lasx_xvfmaxa_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfmaxa_d (__m256d _1, __m256d _2)
+{
+  return (__m256d)__builtin_lasx_xvfmaxa_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfclass_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvfclass_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfclass_d (__m256d _1)
+{
+  return (__m256i)__builtin_lasx_xvfclass_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfsqrt_s (__m256 _1)
+{
+  return (__m256)__builtin_lasx_xvfsqrt_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfsqrt_d (__m256d _1)
+{
+  return (__m256d)__builtin_lasx_xvfsqrt_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfrecip_s (__m256 _1)
+{
+  return (__m256)__builtin_lasx_xvfrecip_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfrecip_d (__m256d _1)
+{
+  return (__m256d)__builtin_lasx_xvfrecip_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfrint_s (__m256 _1)
+{
+  return (__m256)__builtin_lasx_xvfrint_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfrint_d (__m256d _1)
+{
+  return (__m256d)__builtin_lasx_xvfrint_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfrsqrt_s (__m256 _1)
+{
+  return (__m256)__builtin_lasx_xvfrsqrt_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfrsqrt_d (__m256d _1)
+{
+  return (__m256d)__builtin_lasx_xvfrsqrt_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvflogb_s (__m256 _1)
+{
+  return (__m256)__builtin_lasx_xvflogb_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvflogb_d (__m256d _1)
+{
+  return (__m256d)__builtin_lasx_xvflogb_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SF, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfcvth_s_h (__m256i _1)
+{
+  return (__m256)__builtin_lasx_xvfcvth_s_h ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfcvth_d_s (__m256 _1)
+{
+  return (__m256d)__builtin_lasx_xvfcvth_d_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SF, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfcvtl_s_h (__m256i _1)
+{
+  return (__m256)__builtin_lasx_xvfcvtl_s_h ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfcvtl_d_s (__m256 _1)
+{
+  return (__m256d)__builtin_lasx_xvfcvtl_d_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftint_w_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftint_w_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftint_l_d (__m256d _1)
+{
+  return (__m256i)__builtin_lasx_xvftint_l_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  UV8SI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftint_wu_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftint_wu_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  UV4DI, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftint_lu_d (__m256d _1)
+{
+  return (__m256i)__builtin_lasx_xvftint_lu_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrz_w_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrz_w_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrz_l_d (__m256d _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrz_l_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  UV8SI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrz_wu_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrz_wu_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  UV4DI, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrz_lu_d (__m256d _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrz_lu_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SF, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvffint_s_w (__m256i _1)
+{
+  return (__m256)__builtin_lasx_xvffint_s_w ((v8i32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DF, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvffint_d_l (__m256i _1)
+{
+  return (__m256d)__builtin_lasx_xvffint_d_l ((v4i64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SF, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvffint_s_wu (__m256i _1)
+{
+  return (__m256)__builtin_lasx_xvffint_s_wu ((v8u32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DF, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvffint_d_lu (__m256i _1)
+{
+  return (__m256d)__builtin_lasx_xvffint_d_lu ((v4u64)_1);
+}
+
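+/* Usage sketch (illustrative only): a float -> int -> float round trip that
+   drops the fractional part, using the truncating (round-toward-zero)
+   conversion and the integer-to-float conversion above.  The helper name is
+   hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 example_truncate_s (__m256 v)
+{
+  return __lasx_xvffint_s_w (__lasx_xvftintrz_w_s (v));
+}
+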
+/* Assembly instruction format:	xd, xj, rk.  */
+/* Data types in instruction templates:  V32QI, V32QI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplve_b (__m256i _1, int _2)
+{
+  return (__m256i)__builtin_lasx_xvreplve_b ((v32i8)_1, (int)_2);
+}
+
+/* Assembly instruction format:	xd, xj, rk.  */
+/* Data types in instruction templates:  V16HI, V16HI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplve_h (__m256i _1, int _2)
+{
+  return (__m256i)__builtin_lasx_xvreplve_h ((v16i16)_1, (int)_2);
+}
+
+/* Assembly instruction format:	xd, xj, rk.  */
+/* Data types in instruction templates:  V8SI, V8SI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplve_w (__m256i _1, int _2)
+{
+  return (__m256i)__builtin_lasx_xvreplve_w ((v8i32)_1, (int)_2);
+}
+
+/* Assembly instruction format:	xd, xj, rk.  */
+/* Data types in instruction templates:  V4DI, V4DI, SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplve_d (__m256i _1, int _2)
+{
+  return (__m256i)__builtin_lasx_xvreplve_d ((v4i64)_1, (int)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, USI.  */
+#define __lasx_xvpermi_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui8*/ _3) \
+  ((__m256i)__builtin_lasx_xvpermi_w ((v8i32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvandn_v (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvandn_v ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvneg_b (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvneg_b ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvneg_h (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvneg_h ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvneg_w (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvneg_w ((v8i32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvneg_d (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvneg_d ((v4i64)_1);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmuh_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmuh_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmuh_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmuh_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmuh_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmuh_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmuh_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmuh_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmuh_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmuh_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmuh_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmuh_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmuh_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmuh_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmuh_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmuh_du ((v4u64)_1, (v4u64)_2);
+}
+
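+/* Usage sketch (illustrative only): xvmuh_* returns the high half of the
+   widened element-wise product, e.g. bits 63..32 of each 32 x 32 multiply
+   for xvmuh_w.  The helper name is hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i example_mul_high_w (__m256i a, __m256i b)
+{
+  return __lasx_xvmuh_w (a, b);
+}
+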
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  V16HI, V32QI, UQI.  */
+#define __lasx_xvsllwil_h_b(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvsllwil_h_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V8SI, V16HI, UQI.  */
+#define __lasx_xvsllwil_w_h(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvsllwil_w_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V4DI, V8SI, UQI.  */
+#define __lasx_xvsllwil_d_w(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvsllwil_d_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  UV16HI, UV32QI, UQI.  */
+#define __lasx_xvsllwil_hu_bu(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvsllwil_hu_bu ((v32u8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  UV8SI, UV16HI, UQI.  */
+#define __lasx_xvsllwil_wu_hu(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvsllwil_wu_hu ((v16u16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV4DI, UV8SI, UQI.  */
+#define __lasx_xvsllwil_du_wu(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvsllwil_du_wu ((v8u32)(_1), (_2)))
+
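+/* Usage sketch (illustrative only): xvsllwil_* widens the low-half elements
+   of each 128-bit lane while shifting left; with a shift count of 0 it acts
+   as a plain sign (or zero) extension, here from bytes to halfwords.  The
+   helper name is hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i example_widen_low_bytes (__m256i v)
+{
+  return __lasx_xvsllwil_h_b (v, 0);
+}
+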
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsran_b_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsran_b_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsran_h_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsran_h_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsran_w_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsran_w_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssran_b_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssran_b_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssran_h_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssran_h_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssran_w_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssran_w_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssran_bu_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssran_bu_h ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssran_hu_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssran_hu_w ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssran_wu_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssran_wu_d ((v4u64)_1, (v4u64)_2);
+}
+
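+/* Usage sketch (illustrative only): xvssran_b_h arithmetic-shifts each
+   16-bit element of the first operand by the per-element amount in the
+   second, then saturates to signed bytes and narrows.  The helper name is
+   hypothetical.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i example_narrow_sat_b_h (__m256i v, __m256i shift)
+{
+  return __lasx_xvssran_b_h (v, shift);
+}
+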
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrarn_b_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrarn_b_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrarn_h_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrarn_h_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrarn_w_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrarn_w_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrarn_b_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrarn_b_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrarn_h_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrarn_h_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrarn_w_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrarn_w_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrarn_bu_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrarn_bu_h ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrarn_hu_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrarn_hu_w ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrarn_wu_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrarn_wu_d ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrln_b_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrln_b_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrln_h_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrln_h_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrln_w_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrln_w_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrln_bu_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrln_bu_h ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrln_hu_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrln_hu_w ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrln_wu_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrln_wu_d ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrlrn_b_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrlrn_b_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrlrn_h_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrlrn_h_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsrlrn_w_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsrlrn_w_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV32QI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrlrn_bu_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrlrn_bu_h ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrlrn_hu_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrlrn_hu_w ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrlrn_wu_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrlrn_wu_d ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, UQI.  */
+#define __lasx_xvfrstpi_b(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvfrstpi_b ((v32i8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, UQI.  */
+#define __lasx_xvfrstpi_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvfrstpi_h ((v16i16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfrstp_b (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvfrstp_b ((v32i8)_1, (v32i8)_2, (v32i8)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfrstp_h (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvfrstp_h ((v16i16)_1, (v16i16)_2, (v16i16)_3);
+}
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, USI.  */
+#define __lasx_xvshuf4i_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui8*/ _3) \
+  ((__m256i)__builtin_lasx_xvshuf4i_d ((v4i64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V32QI, V32QI, UQI.  */
+#define __lasx_xvbsrl_v(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvbsrl_v ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V32QI, V32QI, UQI.  */
+#define __lasx_xvbsll_v(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvbsll_v ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, USI.  */
+#define __lasx_xvextrins_b(/*__m256i*/ _1, /*__m256i*/ _2, /*ui8*/ _3) \
+  ((__m256i)__builtin_lasx_xvextrins_b ((v32i8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, USI.  */
+#define __lasx_xvextrins_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui8*/ _3) \
+  ((__m256i)__builtin_lasx_xvextrins_h ((v16i16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, USI.  */
+#define __lasx_xvextrins_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui8*/ _3) \
+  ((__m256i)__builtin_lasx_xvextrins_w ((v8i32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, USI.  */
+#define __lasx_xvextrins_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui8*/ _3) \
+  ((__m256i)__builtin_lasx_xvextrins_d ((v4i64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmskltz_b (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvmskltz_b ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmskltz_h (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvmskltz_h ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmskltz_w (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvmskltz_w ((v8i32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmskltz_d (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvmskltz_d ((v4i64)_1);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsigncov_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsigncov_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsigncov_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsigncov_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsigncov_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsigncov_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsigncov_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsigncov_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk, xa.  */
+/* Data types in instruction templates:  V8SF, V8SF, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfmadd_s (__m256 _1, __m256 _2, __m256 _3)
+{
+  return (__m256)__builtin_lasx_xvfmadd_s ((v8f32)_1, (v8f32)_2, (v8f32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk, xa.  */
+/* Data types in instruction templates:  V4DF, V4DF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfmadd_d (__m256d _1, __m256d _2, __m256d _3)
+{
+  return (__m256d)__builtin_lasx_xvfmadd_d ((v4f64)_1, (v4f64)_2, (v4f64)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk, xa.  */
+/* Data types in instruction templates:  V8SF, V8SF, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfmsub_s (__m256 _1, __m256 _2, __m256 _3)
+{
+  return (__m256)__builtin_lasx_xvfmsub_s ((v8f32)_1, (v8f32)_2, (v8f32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk, xa.  */
+/* Data types in instruction templates:  V4DF, V4DF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfmsub_d (__m256d _1, __m256d _2, __m256d _3)
+{
+  return (__m256d)__builtin_lasx_xvfmsub_d ((v4f64)_1, (v4f64)_2, (v4f64)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk, xa.  */
+/* Data types in instruction templates:  V8SF, V8SF, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfnmadd_s (__m256 _1, __m256 _2, __m256 _3)
+{
+  return (__m256)__builtin_lasx_xvfnmadd_s ((v8f32)_1, (v8f32)_2, (v8f32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk, xa.  */
+/* Data types in instruction templates:  V4DF, V4DF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfnmadd_d (__m256d _1, __m256d _2, __m256d _3)
+{
+  return (__m256d)__builtin_lasx_xvfnmadd_d ((v4f64)_1, (v4f64)_2, (v4f64)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk, xa.  */
+/* Data types in instruction templates:  V8SF, V8SF, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfnmsub_s (__m256 _1, __m256 _2, __m256 _3)
+{
+  return (__m256)__builtin_lasx_xvfnmsub_s ((v8f32)_1, (v8f32)_2, (v8f32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk, xa.  */
+/* Data types in instruction templates:  V4DF, V4DF, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfnmsub_d (__m256d _1, __m256d _2, __m256d _3)
+{
+  return (__m256d)__builtin_lasx_xvfnmsub_d ((v4f64)_1, (v4f64)_2, (v4f64)_3);
+}
+
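+/* Usage sketch (illustrative only, kept out of compilation): the four FMA
+   intrinsics above fuse a multiply with an add; xvfmadd computes a * b + c
+   element-wise, and xvfmsub/xvfnmadd/xvfnmsub are the subtracted and
+   negated variants.  The function and variable names below are
+   hypothetical and not part of this patch.  */
+#if 0
+static inline __m256
+example_fma_f32 (__m256 a, __m256 b, __m256 c)
+{
+  /* Element-wise a * b + c on eight single-precision lanes.  */
+  return __lasx_xvfmadd_s (a, b, c);
+}
+#endif
+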
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrne_w_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrne_w_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrne_l_d (__m256d _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrne_l_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrp_w_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrp_w_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrp_l_d (__m256d _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrp_l_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrm_w_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrm_w_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrm_l_d (__m256d _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrm_l_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftint_w_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvftint_w_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SF, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvffint_s_l (__m256i _1, __m256i _2)
+{
+  return (__m256)__builtin_lasx_xvffint_s_l ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrz_w_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvftintrz_w_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrp_w_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvftintrp_w_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrm_w_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvftintrm_w_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrne_w_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvftintrne_w_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftinth_l_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftinth_l_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintl_l_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintl_l_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DF, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvffinth_d_w (__m256i _1)
+{
+  return (__m256d)__builtin_lasx_xvffinth_d_w ((v8i32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DF, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvffintl_d_w (__m256i _1)
+{
+  return (__m256d)__builtin_lasx_xvffintl_d_w ((v8i32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrzh_l_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrzh_l_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrzl_l_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrzl_l_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrph_l_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrph_l_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrpl_l_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrpl_l_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrmh_l_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrmh_l_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrml_l_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrml_l_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrneh_l_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrneh_l_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvftintrnel_l_s (__m256 _1)
+{
+  return (__m256i)__builtin_lasx_xvftintrnel_l_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfrintrne_s (__m256 _1)
+{
+  return (__m256)__builtin_lasx_xvfrintrne_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfrintrne_d (__m256d _1)
+{
+  return (__m256d)__builtin_lasx_xvfrintrne_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfrintrz_s (__m256 _1)
+{
+  return (__m256)__builtin_lasx_xvfrintrz_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfrintrz_d (__m256d _1)
+{
+  return (__m256d)__builtin_lasx_xvfrintrz_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfrintrp_s (__m256 _1)
+{
+  return (__m256)__builtin_lasx_xvfrintrp_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfrintrp_d (__m256d _1)
+{
+  return (__m256d)__builtin_lasx_xvfrintrp_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256 __lasx_xvfrintrm_s (__m256 _1)
+{
+  return (__m256)__builtin_lasx_xvfrintrm_s ((v8f32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256d __lasx_xvfrintrm_d (__m256d _1)
+{
+  return (__m256d)__builtin_lasx_xvfrintrm_d ((v4f64)_1);
+}
+
+/* Assembly instruction format:	xd, rj, si12.  */
+/* Data types in instruction templates:  V32QI, CVPOINTER, SI.  */
+#define __lasx_xvld(/*void **/ _1, /*si12*/ _2) \
+  ((__m256i)__builtin_lasx_xvld ((void *)(_1), (_2)))
+
+/* Assembly instruction format:	xd, rj, si12.  */
+/* Data types in instruction templates:  VOID, V32QI, CVPOINTER, SI.  */
+#define __lasx_xvst(/*__m256i*/ _1, /*void **/ _2, /*si12*/ _3) \
+  ((void)__builtin_lasx_xvst ((v32i8)(_1), (void *)(_2), (_3)))
+
+/* Assembly instruction format:	xd, rj, si8, idx.  */
+/* Data types in instruction templates:  VOID, V32QI, CVPOINTER, SI, UQI.  */
+#define __lasx_xvstelm_b(/*__m256i*/ _1, /*void **/ _2, /*si8*/ _3, /*idx*/ _4) \
+  ((void)__builtin_lasx_xvstelm_b ((v32i8)(_1), (void *)(_2), (_3), (_4)))
+
+/* Assembly instruction format:	xd, rj, si8, idx.  */
+/* Data types in instruction templates:  VOID, V16HI, CVPOINTER, SI, UQI.  */
+#define __lasx_xvstelm_h(/*__m256i*/ _1, /*void **/ _2, /*si8*/ _3, /*idx*/ _4) \
+  ((void)__builtin_lasx_xvstelm_h ((v16i16)(_1), (void *)(_2), (_3), (_4)))
+
+/* Assembly instruction format:	xd, rj, si8, idx.  */
+/* Data types in instruction templates:  VOID, V8SI, CVPOINTER, SI, UQI.  */
+#define __lasx_xvstelm_w(/*__m256i*/ _1, /*void **/ _2, /*si8*/ _3, /*idx*/ _4) \
+  ((void)__builtin_lasx_xvstelm_w ((v8i32)(_1), (void *)(_2), (_3), (_4)))
+
+/* Assembly instruction format:	xd, rj, si8, idx.  */
+/* Data types in instruction templates:  VOID, V4DI, CVPOINTER, SI, UQI.  */
+#define __lasx_xvstelm_d(/*__m256i*/ _1, /*void **/ _2, /*si8*/ _3, /*idx*/ _4) \
+  ((void)__builtin_lasx_xvstelm_d ((v4i64)(_1), (void *)(_2), (_3), (_4)))
+
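+/* Usage sketch (illustrative only, kept out of compilation): xvld/xvst move
+   whole 256-bit vectors, while the xvstelm_* macros store one element of a
+   vector to memory.  Offsets and element indices must be compile-time
+   constants; the function and buffer names below are hypothetical.  */
+#if 0
+static inline void
+example_copy_low_double (const void *src, void *dst)
+{
+  __m256i v = __lasx_xvld (src, 0);     /* load 32 bytes from src          */
+  __lasx_xvstelm_d (v, dst, 0, 0);      /* store 64-bit element 0 to dst   */
+}
+#endif
+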
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, UQI.  */
+#define __lasx_xvinsve0_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui3*/ _3) \
+  ((__m256i)__builtin_lasx_xvinsve0_w ((v8i32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui2.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, UQI.  */
+#define __lasx_xvinsve0_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui2*/ _3) \
+  ((__m256i)__builtin_lasx_xvinsve0_d ((v4i64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  V8SI, V8SI, UQI.  */
+#define __lasx_xvpickve_w(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvpickve_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui2.  */
+/* Data types in instruction templates:  V4DI, V4DI, UQI.  */
+#define __lasx_xvpickve_d(/*__m256i*/ _1, /*ui2*/ _2) \
+  ((__m256i)__builtin_lasx_xvpickve_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrlrn_b_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrlrn_b_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrlrn_h_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrlrn_h_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrlrn_w_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrlrn_w_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrln_b_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrln_b_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrln_h_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrln_h_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvssrln_w_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvssrln_w_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvorn_v (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvorn_v ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, i13.  */
+/* Data types in instruction templates:  V4DI, HI.  */
+#define __lasx_xvldi(/*i13*/ _1) \
+  ((__m256i)__builtin_lasx_xvldi ((_1)))
+
+/* Assembly instruction format:	xd, rj, rk.  */
+/* Data types in instruction templates:  V32QI, CVPOINTER, DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvldx (void * _1, long int _2)
+{
+  return (__m256i)__builtin_lasx_xvldx ((void *)_1, (long int)_2);
+}
+
+/* Assembly instruction format:	xd, rj, rk.  */
+/* Data types in instruction templates:  VOID, V32QI, CVPOINTER, DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+void __lasx_xvstx (__m256i _1, void * _2, long int _3)
+{
+  return (void)__builtin_lasx_xvstx ((v32i8)_1, (void *)_2, (long int)_3);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvextl_qu_du (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvextl_qu_du ((v4u64)_1);
+}
+
+/* Assembly instruction format:	xd, rj, ui3.  */
+/* Data types in instruction templates:  V8SI, V8SI, SI, UQI.  */
+#define __lasx_xvinsgr2vr_w(/*__m256i*/ _1, /*int*/ _2, /*ui3*/ _3) \
+  ((__m256i)__builtin_lasx_xvinsgr2vr_w ((v8i32)(_1), (int)(_2), (_3)))
+
+/* Assembly instruction format:	xd, rj, ui2.  */
+/* Data types in instruction templates:  V4DI, V4DI, DI, UQI.  */
+#define __lasx_xvinsgr2vr_d(/*__m256i*/ _1, /*long int*/ _2, /*ui2*/ _3) \
+  ((__m256i)__builtin_lasx_xvinsgr2vr_d ((v4i64)(_1), (long int)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplve0_b (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvreplve0_b ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplve0_h (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvreplve0_h ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplve0_w (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvreplve0_w ((v8i32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplve0_d (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvreplve0_d ((v4i64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvreplve0_q (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvreplve0_q ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V16HI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_vext2xv_h_b (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_vext2xv_h_b ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_vext2xv_w_h (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_vext2xv_w_h ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_vext2xv_d_w (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_vext2xv_d_w ((v8i32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_vext2xv_w_b (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_vext2xv_w_b ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_vext2xv_d_h (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_vext2xv_d_h ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_vext2xv_d_b (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_vext2xv_d_b ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V16HI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_vext2xv_hu_bu (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_vext2xv_hu_bu ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_vext2xv_wu_hu (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_vext2xv_wu_hu ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_vext2xv_du_wu (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_vext2xv_du_wu ((v8i32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_vext2xv_wu_bu (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_vext2xv_wu_bu ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_vext2xv_du_hu (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_vext2xv_du_hu ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_vext2xv_du_bu (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_vext2xv_du_bu ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, USI.  */
+#define __lasx_xvpermi_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui8*/ _3) \
+  ((__m256i)__builtin_lasx_xvpermi_q ((v32i8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui8.  */
+/* Data types in instruction templates:  V4DI, V4DI, USI.  */
+#define __lasx_xvpermi_d(/*__m256i*/ _1, /*ui8*/ _2) \
+  ((__m256i)__builtin_lasx_xvpermi_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvperm_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvperm_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, rj, si12.  */
+/* Data types in instruction templates:  V32QI, CVPOINTER, SI.  */
+#define __lasx_xvldrepl_b(/*void **/ _1, /*si12*/ _2) \
+  ((__m256i)__builtin_lasx_xvldrepl_b ((void *)(_1), (_2)))
+
+/* Assembly instruction format:	xd, rj, si11.  */
+/* Data types in instruction templates:  V16HI, CVPOINTER, SI.  */
+#define __lasx_xvldrepl_h(/*void **/ _1, /*si11*/ _2) \
+  ((__m256i)__builtin_lasx_xvldrepl_h ((void *)(_1), (_2)))
+
+/* Assembly instruction format:	xd, rj, si10.  */
+/* Data types in instruction templates:  V8SI, CVPOINTER, SI.  */
+#define __lasx_xvldrepl_w(/*void **/ _1, /*si10*/ _2) \
+  ((__m256i)__builtin_lasx_xvldrepl_w ((void *)(_1), (_2)))
+
+/* Assembly instruction format:	xd, rj, si9.  */
+/* Data types in instruction templates:  V4DI, CVPOINTER, SI.  */
+#define __lasx_xvldrepl_d(/*void **/ _1, /*si9*/ _2) \
+  ((__m256i)__builtin_lasx_xvldrepl_d ((void *)(_1), (_2)))
+
+/* Assembly instruction format:	rd, xj, ui3.  */
+/* Data types in instruction templates:  SI, V8SI, UQI.  */
+#define __lasx_xvpickve2gr_w(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((int)__builtin_lasx_xvpickve2gr_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	rd, xj, ui3.  */
+/* Data types in instruction templates:  USI, V8SI, UQI.  */
+#define __lasx_xvpickve2gr_wu(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((unsigned int)__builtin_lasx_xvpickve2gr_wu ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	rd, xj, ui2.  */
+/* Data types in instruction templates:  DI, V4DI, UQI.  */
+#define __lasx_xvpickve2gr_d(/*__m256i*/ _1, /*ui2*/ _2) \
+  ((long int)__builtin_lasx_xvpickve2gr_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	rd, xj, ui2.  */
+/* Data types in instruction templates:  UDI, V4DI, UQI.  */
+#define __lasx_xvpickve2gr_du(/*__m256i*/ _1, /*ui2*/ _2) \
+  ((unsigned long int)__builtin_lasx_xvpickve2gr_du ((v4i64)(_1), (_2)))
+
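+/* Usage sketch (illustrative only, kept out of compilation): xvinsgr2vr_*
+   insert a general-purpose register into a chosen lane and xvpickve2gr_*
+   read one back; lane indices must be literal constants.  Names below are
+   hypothetical.  */
+#if 0
+static inline int
+example_lane_roundtrip (__m256i v, int x)
+{
+  v = __lasx_xvinsgr2vr_w (v, x, 3);    /* write x into 32-bit lane 3  */
+  return __lasx_xvpickve2gr_w (v, 3);   /* read the same lane back     */
+}
+#endif
+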
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwev_q_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwev_q_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwev_d_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwev_d_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwev_w_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwev_w_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwev_h_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwev_h_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwev_q_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwev_q_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwev_d_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwev_d_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwev_w_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwev_w_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwev_h_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwev_h_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwev_q_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwev_q_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwev_d_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwev_d_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwev_w_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwev_w_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwev_h_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwev_h_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwev_q_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwev_q_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwev_d_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwev_d_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwev_w_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwev_w_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwev_h_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwev_h_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwev_q_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwev_q_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwev_d_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwev_d_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwev_w_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwev_w_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwev_h_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwev_h_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwev_q_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwev_q_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwev_d_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwev_d_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwev_w_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwev_w_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwev_h_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwev_h_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwod_q_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwod_q_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwod_d_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwod_d_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwod_w_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwod_w_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwod_h_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwod_h_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwod_q_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwod_q_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwod_d_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwod_d_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwod_w_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwod_w_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwod_h_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwod_h_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwod_q_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwod_q_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwod_d_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwod_d_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwod_w_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwod_w_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwod_h_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwod_h_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwod_q_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwod_q_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwod_d_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwod_d_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwod_w_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwod_w_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsubwod_h_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsubwod_h_bu ((v32u8)_1, (v32u8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwod_q_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwod_q_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwod_d_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwod_d_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwod_w_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwod_w_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwod_h_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwod_h_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwod_q_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwod_q_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwod_d_wu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwod_d_wu ((v8u32)_1, (v8u32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwod_w_hu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwod_w_hu ((v16u16)_1, (v16u16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwod_h_bu (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwod_h_bu ((v32u8)_1, (v32u8)_2);
+}
+
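+/* Usage sketch (illustrative only, kept out of compilation): the *wev/*wod
+   pairs widen and operate on the even- and odd-indexed elements
+   respectively, so covering every lane of the inputs takes both halves.
+   The function and parameter names below are hypothetical.  */
+#if 0
+static inline void
+example_widen_mul_w (__m256i a, __m256i b, __m256i *even, __m256i *odd)
+{
+  *even = __lasx_xvmulwev_d_w (a, b);   /* 64-bit products of even 32-bit lanes */
+  *odd  = __lasx_xvmulwod_d_w (a, b);   /* 64-bit products of odd 32-bit lanes  */
+}
+#endif
+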
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwev_d_wu_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwev_d_wu_w ((v8u32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwev_w_hu_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwev_w_hu_h ((v16u16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwev_h_bu_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwev_h_bu_b ((v32u8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwev_d_wu_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwev_d_wu_w ((v8u32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwev_w_hu_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwev_w_hu_h ((v16u16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwev_h_bu_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwev_h_bu_b ((v32u8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwod_d_wu_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwod_d_wu_w ((v8u32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwod_w_hu_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwod_w_hu_h ((v16u16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwod_h_bu_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwod_h_bu_b ((v32u8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwod_d_wu_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwod_d_wu_w ((v8u32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, UV16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwod_w_hu_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwod_w_hu_h ((v16u16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, UV32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwod_h_bu_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwod_h_bu_b ((v32u8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhaddw_q_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhaddw_q_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhaddw_qu_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhaddw_qu_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhsubw_q_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhsubw_q_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvhsubw_qu_du (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvhsubw_qu_du ((v4u64)_1, (v4u64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwev_q_d (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwev_q_d ((v4i64)_1, (v4i64)_2, (v4i64)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwev_d_w (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwev_d_w ((v4i64)_1, (v8i32)_2, (v8i32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwev_w_h (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwev_w_h ((v8i32)_1, (v16i16)_2, (v16i16)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwev_h_b (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwev_h_b ((v16i16)_1, (v32i8)_2, (v32i8)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwev_q_du (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwev_q_du ((v4u64)_1, (v4u64)_2, (v4u64)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwev_d_wu (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwev_d_wu ((v4u64)_1, (v8u32)_2, (v8u32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwev_w_hu (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwev_w_hu ((v8u32)_1, (v16u16)_2, (v16u16)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwev_h_bu (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwev_h_bu ((v16u16)_1, (v32u8)_2, (v32u8)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwod_q_d (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwod_q_d ((v4i64)_1, (v4i64)_2, (v4i64)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwod_d_w (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwod_d_w ((v4i64)_1, (v8i32)_2, (v8i32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwod_w_h (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwod_w_h ((v8i32)_1, (v16i16)_2, (v16i16)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwod_h_b (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwod_h_b ((v16i16)_1, (v32i8)_2, (v32i8)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwod_q_du (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwod_q_du ((v4u64)_1, (v4u64)_2, (v4u64)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, UV8SI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwod_d_wu (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwod_d_wu ((v4u64)_1, (v8u32)_2, (v8u32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, UV16HI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwod_w_hu (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwod_w_hu ((v8u32)_1, (v16u16)_2, (v16u16)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, UV32QI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwod_h_bu (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwod_h_bu ((v16u16)_1, (v32u8)_2, (v32u8)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, UV4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwev_q_du_d (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwev_q_du_d ((v4i64)_1, (v4u64)_2, (v4i64)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, UV8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwev_d_wu_w (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwev_d_wu_w ((v4i64)_1, (v8u32)_2, (v8i32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, UV16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwev_w_hu_h (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwev_w_hu_h ((v8i32)_1, (v16u16)_2, (v16i16)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, UV32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwev_h_bu_b (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwev_h_bu_b ((v16i16)_1, (v32u8)_2, (v32i8)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, UV4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwod_q_du_d (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwod_q_du_d ((v4i64)_1, (v4u64)_2, (v4i64)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, UV8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwod_d_wu_w (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwod_d_wu_w ((v4i64)_1, (v8u32)_2, (v8i32)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, UV16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwod_w_hu_h (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwod_w_hu_h ((v8i32)_1, (v16u16)_2, (v16i16)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, UV32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmaddwod_h_bu_b (__m256i _1, __m256i _2, __m256i _3)
+{
+  return (__m256i)__builtin_lasx_xvmaddwod_h_bu_b ((v16i16)_1, (v32u8)_2, (v32i8)_3);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvrotr_b (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvrotr_b ((v32i8)_1, (v32i8)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvrotr_h (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvrotr_h ((v16i16)_1, (v16i16)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvrotr_w (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvrotr_w ((v8i32)_1, (v8i32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvrotr_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvrotr_d ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvadd_q (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvadd_q ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvsub_q (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvsub_q ((v4i64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwev_q_du_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwev_q_du_d ((v4u64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvaddwod_q_du_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvaddwod_q_du_d ((v4u64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwev_q_du_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwev_q_du_d ((v4u64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, UV4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmulwod_q_du_d (__m256i _1, __m256i _2)
+{
+  return (__m256i)__builtin_lasx_xvmulwod_q_du_d ((v4u64)_1, (v4i64)_2);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmskgez_b (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvmskgez_b ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V32QI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvmsknz_b (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvmsknz_b ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V16HI, V32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvexth_h_b (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvexth_h_b ((v32i8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V8SI, V16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvexth_w_h (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvexth_w_h ((v16i16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvexth_d_w (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvexth_d_w ((v8i32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvexth_q_d (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvexth_q_d ((v4i64)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  UV16HI, UV32QI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvexth_hu_bu (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvexth_hu_bu ((v32u8)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  UV8SI, UV16HI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvexth_wu_hu (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvexth_wu_hu ((v16u16)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  UV4DI, UV8SI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvexth_du_wu (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvexth_du_wu ((v8u32)_1);
+}
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  UV4DI, UV4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvexth_qu_du (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvexth_qu_du ((v4u64)_1);
+}
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  V32QI, V32QI, UQI.  */
+#define __lasx_xvrotri_b(/*__m256i*/ _1, /*ui3*/ _2) \
+  ((__m256i)__builtin_lasx_xvrotri_b ((v32i8)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V16HI, V16HI, UQI.  */
+#define __lasx_xvrotri_h(/*__m256i*/ _1, /*ui4*/ _2) \
+  ((__m256i)__builtin_lasx_xvrotri_h ((v16i16)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V8SI, V8SI, UQI.  */
+#define __lasx_xvrotri_w(/*__m256i*/ _1, /*ui5*/ _2) \
+  ((__m256i)__builtin_lasx_xvrotri_w ((v8i32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V4DI, V4DI, UQI.  */
+#define __lasx_xvrotri_d(/*__m256i*/ _1, /*ui6*/ _2) \
+  ((__m256i)__builtin_lasx_xvrotri_d ((v4i64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj.  */
+/* Data types in instruction templates:  V4DI, V4DI.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvextl_q_d (__m256i _1)
+{
+  return (__m256i)__builtin_lasx_xvextl_q_d ((v4i64)_1);
+}
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, USI.  */
+#define __lasx_xvsrlni_b_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui4*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrlni_b_h ((v32i8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, USI.  */
+#define __lasx_xvsrlni_h_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrlni_h_w ((v16i16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, USI.  */
+#define __lasx_xvsrlni_w_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui6*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrlni_w_d ((v8i32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui7.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, USI.  */
+#define __lasx_xvsrlni_d_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui7*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrlni_d_q ((v4i64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, USI.  */
+#define __lasx_xvsrlrni_b_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui4*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrlrni_b_h ((v32i8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, USI.  */
+#define __lasx_xvsrlrni_h_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrlrni_h_w ((v16i16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, USI.  */
+#define __lasx_xvsrlrni_w_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui6*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrlrni_w_d ((v8i32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui7.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, USI.  */
+#define __lasx_xvsrlrni_d_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui7*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrlrni_d_q ((v4i64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, USI.  */
+#define __lasx_xvssrlni_b_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui4*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlni_b_h ((v32i8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, USI.  */
+#define __lasx_xvssrlni_h_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlni_h_w ((v16i16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, USI.  */
+#define __lasx_xvssrlni_w_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui6*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlni_w_d ((v8i32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui7.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, USI.  */
+#define __lasx_xvssrlni_d_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui7*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlni_d_q ((v4i64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, V32QI, USI.  */
+#define __lasx_xvssrlni_bu_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui4*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlni_bu_h ((v32u8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, V16HI, USI.  */
+#define __lasx_xvssrlni_hu_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlni_hu_w ((v16u16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, V8SI, USI.  */
+#define __lasx_xvssrlni_wu_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui6*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlni_wu_d ((v8u32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui7.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, V4DI, USI.  */
+#define __lasx_xvssrlni_du_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui7*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlni_du_q ((v4u64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, USI.  */
+#define __lasx_xvssrlrni_b_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui4*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlrni_b_h ((v32i8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, USI.  */
+#define __lasx_xvssrlrni_h_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlrni_h_w ((v16i16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, USI.  */
+#define __lasx_xvssrlrni_w_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui6*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlrni_w_d ((v8i32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui7.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, USI.  */
+#define __lasx_xvssrlrni_d_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui7*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlrni_d_q ((v4i64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, V32QI, USI.  */
+#define __lasx_xvssrlrni_bu_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui4*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlrni_bu_h ((v32u8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, V16HI, USI.  */
+#define __lasx_xvssrlrni_hu_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlrni_hu_w ((v16u16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, V8SI, USI.  */
+#define __lasx_xvssrlrni_wu_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui6*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlrni_wu_d ((v8u32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui7.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, V4DI, USI.  */
+#define __lasx_xvssrlrni_du_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui7*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrlrni_du_q ((v4u64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, USI.  */
+#define __lasx_xvsrani_b_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui4*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrani_b_h ((v32i8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, USI.  */
+#define __lasx_xvsrani_h_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrani_h_w ((v16i16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, USI.  */
+#define __lasx_xvsrani_w_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui6*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrani_w_d ((v8i32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui7.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, USI.  */
+#define __lasx_xvsrani_d_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui7*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrani_d_q ((v4i64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, USI.  */
+#define __lasx_xvsrarni_b_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui4*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrarni_b_h ((v32i8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, USI.  */
+#define __lasx_xvsrarni_h_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrarni_h_w ((v16i16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, USI.  */
+#define __lasx_xvsrarni_w_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui6*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrarni_w_d ((v8i32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui7.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, USI.  */
+#define __lasx_xvsrarni_d_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui7*/ _3) \
+  ((__m256i)__builtin_lasx_xvsrarni_d_q ((v4i64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, USI.  */
+#define __lasx_xvssrani_b_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui4*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrani_b_h ((v32i8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, USI.  */
+#define __lasx_xvssrani_h_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrani_h_w ((v16i16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, USI.  */
+#define __lasx_xvssrani_w_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui6*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrani_w_d ((v8i32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui7.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, USI.  */
+#define __lasx_xvssrani_d_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui7*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrani_d_q ((v4i64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, V32QI, USI.  */
+#define __lasx_xvssrani_bu_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui4*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrani_bu_h ((v32u8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, V16HI, USI.  */
+#define __lasx_xvssrani_hu_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrani_hu_w ((v16u16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, V8SI, USI.  */
+#define __lasx_xvssrani_wu_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui6*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrani_wu_d ((v8u32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui7.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, V4DI, USI.  */
+#define __lasx_xvssrani_du_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui7*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrani_du_q ((v4u64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  V32QI, V32QI, V32QI, USI.  */
+#define __lasx_xvssrarni_b_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui4*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrarni_b_h ((v32i8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  V16HI, V16HI, V16HI, USI.  */
+#define __lasx_xvssrarni_h_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrarni_h_w ((v16i16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  V8SI, V8SI, V8SI, USI.  */
+#define __lasx_xvssrarni_w_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui6*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrarni_w_d ((v8i32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui7.  */
+/* Data types in instruction templates:  V4DI, V4DI, V4DI, USI.  */
+#define __lasx_xvssrarni_d_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui7*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrarni_d_q ((v4i64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui4.  */
+/* Data types in instruction templates:  UV32QI, UV32QI, V32QI, USI.  */
+#define __lasx_xvssrarni_bu_h(/*__m256i*/ _1, /*__m256i*/ _2, /*ui4*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrarni_bu_h ((v32u8)(_1), (v32i8)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui5.  */
+/* Data types in instruction templates:  UV16HI, UV16HI, V16HI, USI.  */
+#define __lasx_xvssrarni_hu_w(/*__m256i*/ _1, /*__m256i*/ _2, /*ui5*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrarni_hu_w ((v16u16)(_1), (v16i16)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui6.  */
+/* Data types in instruction templates:  UV8SI, UV8SI, V8SI, USI.  */
+#define __lasx_xvssrarni_wu_d(/*__m256i*/ _1, /*__m256i*/ _2, /*ui6*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrarni_wu_d ((v8u32)(_1), (v8i32)(_2), (_3)))
+
+/* Assembly instruction format:	xd, xj, ui7.  */
+/* Data types in instruction templates:  UV4DI, UV4DI, V4DI, USI.  */
+#define __lasx_xvssrarni_du_q(/*__m256i*/ _1, /*__m256i*/ _2, /*ui7*/ _3) \
+  ((__m256i)__builtin_lasx_xvssrarni_du_q ((v4u64)(_1), (v4i64)(_2), (_3)))
+
+/* Assembly instruction format:	cd, xj.  */
+/* Data types in instruction templates:  SI, UV32QI.  */
+#define __lasx_xbnz_b(/*__m256i*/ _1) \
+  ((int)__builtin_lasx_xbnz_b ((v32u8)(_1)))
+
+/* Assembly instruction format:	cd, xj.  */
+/* Data types in instruction templates:  SI, UV4DI.  */
+#define __lasx_xbnz_d(/*__m256i*/ _1) \
+  ((int)__builtin_lasx_xbnz_d ((v4u64)(_1)))
+
+/* Assembly instruction format:	cd, xj.  */
+/* Data types in instruction templates:  SI, UV16HI.  */
+#define __lasx_xbnz_h(/*__m256i*/ _1) \
+  ((int)__builtin_lasx_xbnz_h ((v16u16)(_1)))
+
+/* Assembly instruction format:	cd, xj.  */
+/* Data types in instruction templates:  SI, UV32QI.  */
+#define __lasx_xbnz_v(/*__m256i*/ _1) \
+  ((int)__builtin_lasx_xbnz_v ((v32u8)(_1)))
+
+/* Assembly instruction format:	cd, xj.  */
+/* Data types in instruction templates:  SI, UV8SI.  */
+#define __lasx_xbnz_w(/*__m256i*/ _1) \
+  ((int)__builtin_lasx_xbnz_w ((v8u32)(_1)))
+
+/* Assembly instruction format:	cd, xj.  */
+/* Data types in instruction templates:  SI, UV32QI.  */
+#define __lasx_xbz_b(/*__m256i*/ _1) \
+  ((int)__builtin_lasx_xbz_b ((v32u8)(_1)))
+
+/* Assembly instruction format:	cd, xj.  */
+/* Data types in instruction templates:  SI, UV4DI.  */
+#define __lasx_xbz_d(/*__m256i*/ _1) \
+  ((int)__builtin_lasx_xbz_d ((v4u64)(_1)))
+
+/* Assembly instruction format:	cd, xj.  */
+/* Data types in instruction templates:  SI, UV16HI.  */
+#define __lasx_xbz_h(/*__m256i*/ _1) \
+  ((int)__builtin_lasx_xbz_h ((v16u16)(_1)))
+
+/* Assembly instruction format:	cd, xj.  */
+/* Data types in instruction templates:  SI, UV32QI.  */
+#define __lasx_xbz_v(/*__m256i*/ _1) \
+  ((int)__builtin_lasx_xbz_v ((v32u8)(_1)))
+
+/* Assembly instruction format:	cd, xj.  */
+/* Data types in instruction templates:  SI, UV8SI.  */
+#define __lasx_xbz_w(/*__m256i*/ _1) \
+  ((int)__builtin_lasx_xbz_w ((v8u32)(_1)))
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_caf_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_caf_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_caf_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_caf_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_ceq_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_ceq_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_ceq_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_ceq_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cle_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cle_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cle_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cle_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_clt_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_clt_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_clt_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_clt_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cne_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cne_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cne_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cne_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cor_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cor_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cor_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cor_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cueq_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cueq_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cueq_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cueq_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cule_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cule_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cule_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cule_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cult_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cult_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cult_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cult_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cun_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cun_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cune_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cune_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cune_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cune_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_cun_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_cun_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_saf_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_saf_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_saf_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_saf_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_seq_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_seq_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_seq_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_seq_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sle_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sle_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sle_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sle_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_slt_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_slt_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_slt_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_slt_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sne_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sne_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sne_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sne_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sor_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sor_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sor_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sor_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sueq_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sueq_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sueq_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sueq_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sule_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sule_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sule_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sule_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sult_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sult_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sult_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sult_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sun_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sun_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V4DI, V4DF, V4DF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sune_d (__m256d _1, __m256d _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sune_d ((v4f64)_1, (v4f64)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sune_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sune_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, xk.  */
+/* Data types in instruction templates:  V8SI, V8SF, V8SF.  */
+extern __inline __attribute__((__gnu_inline__, __always_inline__, __artificial__))
+__m256i __lasx_xvfcmp_sun_s (__m256 _1, __m256 _2)
+{
+  return (__m256i)__builtin_lasx_xvfcmp_sun_s ((v8f32)_1, (v8f32)_2);
+}
+
+/* Assembly instruction format:	xd, xj, ui2.  */
+/* Data types in instruction templates:  V4DF, V4DF, UQI.  */
+#define __lasx_xvpickve_d_f(/*__m256d*/ _1, /*ui2*/ _2) \
+  ((__m256d)__builtin_lasx_xvpickve_d_f ((v4f64)(_1), (_2)))
+
+/* Assembly instruction format:	xd, xj, ui3.  */
+/* Data types in instruction templates:  V8SF, V8SF, UQI.  */
+#define __lasx_xvpickve_w_f(/*__m256*/ _1, /*ui3*/ _2) \
+  ((__m256)__builtin_lasx_xvpickve_w_f ((v8f32)(_1), (_2)))
+
+/* Assembly instruction format:	xd, si10.  */
+/* Data types in instruction templates:  V32QI, HI.  */
+#define __lasx_xvrepli_b(/*si10*/ _1) \
+  ((__m256i)__builtin_lasx_xvrepli_b ((_1)))
+
+/* Assembly instruction format:	xd, si10.  */
+/* Data types in instruction templates:  V4DI, HI.  */
+#define __lasx_xvrepli_d(/*si10*/ _1) \
+  ((__m256i)__builtin_lasx_xvrepli_d ((_1)))
+
+/* Assembly instruction format:	xd, si10.  */
+/* Data types in instruction templates:  V16HI, HI.  */
+#define __lasx_xvrepli_h(/*si10*/ _1) \
+  ((__m256i)__builtin_lasx_xvrepli_h ((_1)))
+
+/* Assembly instruction format:	xd, si10.  */
+/* Data types in instruction templates:  V8SI, HI.  */
+#define __lasx_xvrepli_w(/*si10*/ _1) \
+  ((__m256i)__builtin_lasx_xvrepli_w ((_1)))
+
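+/* Usage sketch (illustrative only, not part of the interface above): when
+   __loongarch_asx is defined (e.g. when compiling with -mlasx), the wrappers
+   in this header can be called directly on 256-bit vector values, and
+   __lasx_xvadd_q below is expected to map to the xvadd.q instruction:
+
+     __m256i add_q (__m256i a, __m256i b)
+     {
+       return __lasx_xvadd_q (a, b);
+     }  */
+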
+#endif /* defined(__loongarch_asx).  */
+#endif /* _GCC_LOONGSON_ASXINTRIN_H.  */
diff --git a/gcc/config/loongarch/loongarch-builtins.cc b/gcc/config/loongarch/loongarch-builtins.cc
index 5958f5b7fbe..064fee7dfa2 100644
--- a/gcc/config/loongarch/loongarch-builtins.cc
+++ b/gcc/config/loongarch/loongarch-builtins.cc
@@ -74,6 +74,13 @@ enum loongarch_builtin_type
   /* The function corresponds to an LSX conditional branch instruction
      combined with a compare instruction.  */
   LARCH_BUILTIN_LSX_TEST_BRANCH,
+
+  /* For generating LoongArch LASX instructions.  */
+  LARCH_BUILTIN_LASX,
+
+  /* The function corresponds to an LASX conditional branch instruction
+     combined with a compare instruction.  */
+  LARCH_BUILTIN_LASX_TEST_BRANCH,
 };
 
 /* Declare an availability predicate for built-in functions that require
@@ -112,6 +119,7 @@ struct loongarch_builtin_description
 
 AVAIL_ALL (hard_float, TARGET_HARD_FLOAT_ABI)
 AVAIL_ALL (lsx, ISA_HAS_LSX)
+AVAIL_ALL (lasx, ISA_HAS_LASX)
 
 /* Construct a loongarch_builtin_description from the given arguments.
 
@@ -173,6 +181,30 @@ AVAIL_ALL (lsx, ISA_HAS_LSX)
     "__builtin_lsx_" #INSN,  LARCH_BUILTIN_DIRECT_NO_TARGET,		\
     FUNCTION_TYPE, loongarch_builtin_avail_lsx }
 
+/* Define an LASX LARCH_BUILTIN_LASX function __builtin_lasx_<INSN>
+   for instruction CODE_FOR_lasx_<INSN>.  FUNCTION_TYPE is a builtin_description
+   field.  */
+#define LASX_BUILTIN(INSN, FUNCTION_TYPE)				\
+  { CODE_FOR_lasx_ ## INSN,						\
+    "__builtin_lasx_" #INSN,  LARCH_BUILTIN_LASX,			\
+    FUNCTION_TYPE, loongarch_builtin_avail_lasx }
+
+/* Define an LASX LARCH_BUILTIN_DIRECT_NO_TARGET function __builtin_lasx_<INSN>
+   for instruction CODE_FOR_lasx_<INSN>.  FUNCTION_TYPE is a builtin_description
+   field.  */
+#define LASX_NO_TARGET_BUILTIN(INSN, FUNCTION_TYPE)			\
+  { CODE_FOR_lasx_ ## INSN,						\
+    "__builtin_lasx_" #INSN,  LARCH_BUILTIN_DIRECT_NO_TARGET,		\
+    FUNCTION_TYPE, loongarch_builtin_avail_lasx }
+
+/* Define an LASX LARCH_BUILTIN_LASX_TEST_BRANCH function __builtin_lasx_<INSN>
+   for instruction CODE_FOR_lasx_<INSN>.  FUNCTION_TYPE is a builtin_description
+   field.  */
+#define LASX_BUILTIN_TEST_BRANCH(INSN, FUNCTION_TYPE)			\
+  { CODE_FOR_lasx_ ## INSN,						\
+    "__builtin_lasx_" #INSN, LARCH_BUILTIN_LASX_TEST_BRANCH,		\
+    FUNCTION_TYPE, loongarch_builtin_avail_lasx }
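+
+/* Illustrative use of the macros above (hypothetical entry, not taken from
+   this patch): LASX_BUILTIN (xvadd_b, LARCH_V32QI_FTYPE_V32QI_V32QI) would
+   expand to a loongarch_builtin_description binding "__builtin_lasx_xvadd_b"
+   to the CODE_FOR_lasx_xvadd_b token (remapped to CODE_FOR_addv32qi3 by the
+   #defines below), gated by loongarch_builtin_avail_lasx.  */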
+
 /* LoongArch SX define CODE_FOR_lsx_xxx */
 #define CODE_FOR_lsx_vsadd_b CODE_FOR_ssaddv16qi3
 #define CODE_FOR_lsx_vsadd_h CODE_FOR_ssaddv8hi3
@@ -442,6 +474,276 @@ AVAIL_ALL (lsx, ISA_HAS_LSX)
 #define CODE_FOR_lsx_vssrlrn_hu_w CODE_FOR_lsx_vssrlrn_u_hu_w
 #define CODE_FOR_lsx_vssrlrn_wu_d CODE_FOR_lsx_vssrlrn_u_wu_d
 
+/* LoongArch ASX define CODE_FOR_lasx_xxx */
+#define CODE_FOR_lasx_xvsadd_b CODE_FOR_ssaddv32qi3
+#define CODE_FOR_lasx_xvsadd_h CODE_FOR_ssaddv16hi3
+#define CODE_FOR_lasx_xvsadd_w CODE_FOR_ssaddv8si3
+#define CODE_FOR_lasx_xvsadd_d CODE_FOR_ssaddv4di3
+#define CODE_FOR_lasx_xvsadd_bu CODE_FOR_usaddv32qi3
+#define CODE_FOR_lasx_xvsadd_hu CODE_FOR_usaddv16hi3
+#define CODE_FOR_lasx_xvsadd_wu CODE_FOR_usaddv8si3
+#define CODE_FOR_lasx_xvsadd_du CODE_FOR_usaddv4di3
+#define CODE_FOR_lasx_xvadd_b CODE_FOR_addv32qi3
+#define CODE_FOR_lasx_xvadd_h CODE_FOR_addv16hi3
+#define CODE_FOR_lasx_xvadd_w CODE_FOR_addv8si3
+#define CODE_FOR_lasx_xvadd_d CODE_FOR_addv4di3
+#define CODE_FOR_lasx_xvaddi_bu CODE_FOR_addv32qi3
+#define CODE_FOR_lasx_xvaddi_hu CODE_FOR_addv16hi3
+#define CODE_FOR_lasx_xvaddi_wu CODE_FOR_addv8si3
+#define CODE_FOR_lasx_xvaddi_du CODE_FOR_addv4di3
+#define CODE_FOR_lasx_xvand_v CODE_FOR_andv32qi3
+#define CODE_FOR_lasx_xvandi_b CODE_FOR_andv32qi3
+#define CODE_FOR_lasx_xvbitsel_v CODE_FOR_lasx_xvbitsel_b
+#define CODE_FOR_lasx_xvseqi_b CODE_FOR_lasx_xvseq_b
+#define CODE_FOR_lasx_xvseqi_h CODE_FOR_lasx_xvseq_h
+#define CODE_FOR_lasx_xvseqi_w CODE_FOR_lasx_xvseq_w
+#define CODE_FOR_lasx_xvseqi_d CODE_FOR_lasx_xvseq_d
+#define CODE_FOR_lasx_xvslti_b CODE_FOR_lasx_xvslt_b
+#define CODE_FOR_lasx_xvslti_h CODE_FOR_lasx_xvslt_h
+#define CODE_FOR_lasx_xvslti_w CODE_FOR_lasx_xvslt_w
+#define CODE_FOR_lasx_xvslti_d CODE_FOR_lasx_xvslt_d
+#define CODE_FOR_lasx_xvslti_bu CODE_FOR_lasx_xvslt_bu
+#define CODE_FOR_lasx_xvslti_hu CODE_FOR_lasx_xvslt_hu
+#define CODE_FOR_lasx_xvslti_wu CODE_FOR_lasx_xvslt_wu
+#define CODE_FOR_lasx_xvslti_du CODE_FOR_lasx_xvslt_du
+#define CODE_FOR_lasx_xvslei_b CODE_FOR_lasx_xvsle_b
+#define CODE_FOR_lasx_xvslei_h CODE_FOR_lasx_xvsle_h
+#define CODE_FOR_lasx_xvslei_w CODE_FOR_lasx_xvsle_w
+#define CODE_FOR_lasx_xvslei_d CODE_FOR_lasx_xvsle_d
+#define CODE_FOR_lasx_xvslei_bu CODE_FOR_lasx_xvsle_bu
+#define CODE_FOR_lasx_xvslei_hu CODE_FOR_lasx_xvsle_hu
+#define CODE_FOR_lasx_xvslei_wu CODE_FOR_lasx_xvsle_wu
+#define CODE_FOR_lasx_xvslei_du CODE_FOR_lasx_xvsle_du
+#define CODE_FOR_lasx_xvdiv_b CODE_FOR_divv32qi3
+#define CODE_FOR_lasx_xvdiv_h CODE_FOR_divv16hi3
+#define CODE_FOR_lasx_xvdiv_w CODE_FOR_divv8si3
+#define CODE_FOR_lasx_xvdiv_d CODE_FOR_divv4di3
+#define CODE_FOR_lasx_xvdiv_bu CODE_FOR_udivv32qi3
+#define CODE_FOR_lasx_xvdiv_hu CODE_FOR_udivv16hi3
+#define CODE_FOR_lasx_xvdiv_wu CODE_FOR_udivv8si3
+#define CODE_FOR_lasx_xvdiv_du CODE_FOR_udivv4di3
+#define CODE_FOR_lasx_xvfadd_s CODE_FOR_addv8sf3
+#define CODE_FOR_lasx_xvfadd_d CODE_FOR_addv4df3
+#define CODE_FOR_lasx_xvftintrz_w_s CODE_FOR_fix_truncv8sfv8si2
+#define CODE_FOR_lasx_xvftintrz_l_d CODE_FOR_fix_truncv4dfv4di2
+#define CODE_FOR_lasx_xvftintrz_wu_s CODE_FOR_fixuns_truncv8sfv8si2
+#define CODE_FOR_lasx_xvftintrz_lu_d CODE_FOR_fixuns_truncv4dfv4di2
+#define CODE_FOR_lasx_xvffint_s_w CODE_FOR_floatv8siv8sf2
+#define CODE_FOR_lasx_xvffint_d_l CODE_FOR_floatv4div4df2
+#define CODE_FOR_lasx_xvffint_s_wu CODE_FOR_floatunsv8siv8sf2
+#define CODE_FOR_lasx_xvffint_d_lu CODE_FOR_floatunsv4div4df2
+#define CODE_FOR_lasx_xvfsub_s CODE_FOR_subv8sf3
+#define CODE_FOR_lasx_xvfsub_d CODE_FOR_subv4df3
+#define CODE_FOR_lasx_xvfmul_s CODE_FOR_mulv8sf3
+#define CODE_FOR_lasx_xvfmul_d CODE_FOR_mulv4df3
+#define CODE_FOR_lasx_xvfdiv_s CODE_FOR_divv8sf3
+#define CODE_FOR_lasx_xvfdiv_d CODE_FOR_divv4df3
+#define CODE_FOR_lasx_xvfmax_s CODE_FOR_smaxv8sf3
+#define CODE_FOR_lasx_xvfmax_d CODE_FOR_smaxv4df3
+#define CODE_FOR_lasx_xvfmin_s CODE_FOR_sminv8sf3
+#define CODE_FOR_lasx_xvfmin_d CODE_FOR_sminv4df3
+#define CODE_FOR_lasx_xvfsqrt_s CODE_FOR_sqrtv8sf2
+#define CODE_FOR_lasx_xvfsqrt_d CODE_FOR_sqrtv4df2
+#define CODE_FOR_lasx_xvflogb_s CODE_FOR_logbv8sf2
+#define CODE_FOR_lasx_xvflogb_d CODE_FOR_logbv4df2
+#define CODE_FOR_lasx_xvmax_b CODE_FOR_smaxv32qi3
+#define CODE_FOR_lasx_xvmax_h CODE_FOR_smaxv16hi3
+#define CODE_FOR_lasx_xvmax_w CODE_FOR_smaxv8si3
+#define CODE_FOR_lasx_xvmax_d CODE_FOR_smaxv4di3
+#define CODE_FOR_lasx_xvmaxi_b CODE_FOR_smaxv32qi3
+#define CODE_FOR_lasx_xvmaxi_h CODE_FOR_smaxv16hi3
+#define CODE_FOR_lasx_xvmaxi_w CODE_FOR_smaxv8si3
+#define CODE_FOR_lasx_xvmaxi_d CODE_FOR_smaxv4di3
+#define CODE_FOR_lasx_xvmax_bu CODE_FOR_umaxv32qi3
+#define CODE_FOR_lasx_xvmax_hu CODE_FOR_umaxv16hi3
+#define CODE_FOR_lasx_xvmax_wu CODE_FOR_umaxv8si3
+#define CODE_FOR_lasx_xvmax_du CODE_FOR_umaxv4di3
+#define CODE_FOR_lasx_xvmaxi_bu CODE_FOR_umaxv32qi3
+#define CODE_FOR_lasx_xvmaxi_hu CODE_FOR_umaxv16hi3
+#define CODE_FOR_lasx_xvmaxi_wu CODE_FOR_umaxv8si3
+#define CODE_FOR_lasx_xvmaxi_du CODE_FOR_umaxv4di3
+#define CODE_FOR_lasx_xvmin_b CODE_FOR_sminv32qi3
+#define CODE_FOR_lasx_xvmin_h CODE_FOR_sminv16hi3
+#define CODE_FOR_lasx_xvmin_w CODE_FOR_sminv8si3
+#define CODE_FOR_lasx_xvmin_d CODE_FOR_sminv4di3
+#define CODE_FOR_lasx_xvmini_b CODE_FOR_sminv32qi3
+#define CODE_FOR_lasx_xvmini_h CODE_FOR_sminv16hi3
+#define CODE_FOR_lasx_xvmini_w CODE_FOR_sminv8si3
+#define CODE_FOR_lasx_xvmini_d CODE_FOR_sminv4di3
+#define CODE_FOR_lasx_xvmin_bu CODE_FOR_uminv32qi3
+#define CODE_FOR_lasx_xvmin_hu CODE_FOR_uminv16hi3
+#define CODE_FOR_lasx_xvmin_wu CODE_FOR_uminv8si3
+#define CODE_FOR_lasx_xvmin_du CODE_FOR_uminv4di3
+#define CODE_FOR_lasx_xvmini_bu CODE_FOR_uminv32qi3
+#define CODE_FOR_lasx_xvmini_hu CODE_FOR_uminv16hi3
+#define CODE_FOR_lasx_xvmini_wu CODE_FOR_uminv8si3
+#define CODE_FOR_lasx_xvmini_du CODE_FOR_uminv4di3
+#define CODE_FOR_lasx_xvmod_b CODE_FOR_modv32qi3
+#define CODE_FOR_lasx_xvmod_h CODE_FOR_modv16hi3
+#define CODE_FOR_lasx_xvmod_w CODE_FOR_modv8si3
+#define CODE_FOR_lasx_xvmod_d CODE_FOR_modv4di3
+#define CODE_FOR_lasx_xvmod_bu CODE_FOR_umodv32qi3
+#define CODE_FOR_lasx_xvmod_hu CODE_FOR_umodv16hi3
+#define CODE_FOR_lasx_xvmod_wu CODE_FOR_umodv8si3
+#define CODE_FOR_lasx_xvmod_du CODE_FOR_umodv4di3
+#define CODE_FOR_lasx_xvmul_b CODE_FOR_mulv32qi3
+#define CODE_FOR_lasx_xvmul_h CODE_FOR_mulv16hi3
+#define CODE_FOR_lasx_xvmul_w CODE_FOR_mulv8si3
+#define CODE_FOR_lasx_xvmul_d CODE_FOR_mulv4di3
+#define CODE_FOR_lasx_xvclz_b CODE_FOR_clzv32qi2
+#define CODE_FOR_lasx_xvclz_h CODE_FOR_clzv16hi2
+#define CODE_FOR_lasx_xvclz_w CODE_FOR_clzv8si2
+#define CODE_FOR_lasx_xvclz_d CODE_FOR_clzv4di2
+#define CODE_FOR_lasx_xvnor_v CODE_FOR_lasx_xvnor_b
+#define CODE_FOR_lasx_xvor_v CODE_FOR_iorv32qi3
+#define CODE_FOR_lasx_xvori_b CODE_FOR_iorv32qi3
+#define CODE_FOR_lasx_xvnori_b CODE_FOR_lasx_xvnor_b
+#define CODE_FOR_lasx_xvpcnt_b CODE_FOR_popcountv32qi2
+#define CODE_FOR_lasx_xvpcnt_h CODE_FOR_popcountv16hi2
+#define CODE_FOR_lasx_xvpcnt_w CODE_FOR_popcountv8si2
+#define CODE_FOR_lasx_xvpcnt_d CODE_FOR_popcountv4di2
+#define CODE_FOR_lasx_xvxor_v CODE_FOR_xorv32qi3
+#define CODE_FOR_lasx_xvxori_b CODE_FOR_xorv32qi3
+#define CODE_FOR_lasx_xvsll_b CODE_FOR_vashlv32qi3
+#define CODE_FOR_lasx_xvsll_h CODE_FOR_vashlv16hi3
+#define CODE_FOR_lasx_xvsll_w CODE_FOR_vashlv8si3
+#define CODE_FOR_lasx_xvsll_d CODE_FOR_vashlv4di3
+#define CODE_FOR_lasx_xvslli_b CODE_FOR_vashlv32qi3
+#define CODE_FOR_lasx_xvslli_h CODE_FOR_vashlv16hi3
+#define CODE_FOR_lasx_xvslli_w CODE_FOR_vashlv8si3
+#define CODE_FOR_lasx_xvslli_d CODE_FOR_vashlv4di3
+#define CODE_FOR_lasx_xvsra_b CODE_FOR_vashrv32qi3
+#define CODE_FOR_lasx_xvsra_h CODE_FOR_vashrv16hi3
+#define CODE_FOR_lasx_xvsra_w CODE_FOR_vashrv8si3
+#define CODE_FOR_lasx_xvsra_d CODE_FOR_vashrv4di3
+#define CODE_FOR_lasx_xvsrai_b CODE_FOR_vashrv32qi3
+#define CODE_FOR_lasx_xvsrai_h CODE_FOR_vashrv16hi3
+#define CODE_FOR_lasx_xvsrai_w CODE_FOR_vashrv8si3
+#define CODE_FOR_lasx_xvsrai_d CODE_FOR_vashrv4di3
+#define CODE_FOR_lasx_xvsrl_b CODE_FOR_vlshrv32qi3
+#define CODE_FOR_lasx_xvsrl_h CODE_FOR_vlshrv16hi3
+#define CODE_FOR_lasx_xvsrl_w CODE_FOR_vlshrv8si3
+#define CODE_FOR_lasx_xvsrl_d CODE_FOR_vlshrv4di3
+#define CODE_FOR_lasx_xvsrli_b CODE_FOR_vlshrv32qi3
+#define CODE_FOR_lasx_xvsrli_h CODE_FOR_vlshrv16hi3
+#define CODE_FOR_lasx_xvsrli_w CODE_FOR_vlshrv8si3
+#define CODE_FOR_lasx_xvsrli_d CODE_FOR_vlshrv4di3
+#define CODE_FOR_lasx_xvsub_b CODE_FOR_subv32qi3
+#define CODE_FOR_lasx_xvsub_h CODE_FOR_subv16hi3
+#define CODE_FOR_lasx_xvsub_w CODE_FOR_subv8si3
+#define CODE_FOR_lasx_xvsub_d CODE_FOR_subv4di3
+#define CODE_FOR_lasx_xvsubi_bu CODE_FOR_subv32qi3
+#define CODE_FOR_lasx_xvsubi_hu CODE_FOR_subv16hi3
+#define CODE_FOR_lasx_xvsubi_wu CODE_FOR_subv8si3
+#define CODE_FOR_lasx_xvsubi_du CODE_FOR_subv4di3
+#define CODE_FOR_lasx_xvpackod_d CODE_FOR_lasx_xvilvh_d
+#define CODE_FOR_lasx_xvpackev_d CODE_FOR_lasx_xvilvl_d
+#define CODE_FOR_lasx_xvpickod_d CODE_FOR_lasx_xvilvh_d
+#define CODE_FOR_lasx_xvpickev_d CODE_FOR_lasx_xvilvl_d
+#define CODE_FOR_lasx_xvrepli_b CODE_FOR_lasx_xvrepliv32qi
+#define CODE_FOR_lasx_xvrepli_h CODE_FOR_lasx_xvrepliv16hi
+#define CODE_FOR_lasx_xvrepli_w CODE_FOR_lasx_xvrepliv8si
+#define CODE_FOR_lasx_xvrepli_d CODE_FOR_lasx_xvrepliv4di
+
+#define CODE_FOR_lasx_xvandn_v CODE_FOR_xvandnv32qi3
+#define CODE_FOR_lasx_xvorn_v CODE_FOR_xvornv32qi3
+#define CODE_FOR_lasx_xvneg_b CODE_FOR_negv32qi2
+#define CODE_FOR_lasx_xvneg_h CODE_FOR_negv16hi2
+#define CODE_FOR_lasx_xvneg_w CODE_FOR_negv8si2
+#define CODE_FOR_lasx_xvneg_d CODE_FOR_negv4di2
+#define CODE_FOR_lasx_xvbsrl_v CODE_FOR_lasx_xvbsrl_b
+#define CODE_FOR_lasx_xvbsll_v CODE_FOR_lasx_xvbsll_b
+#define CODE_FOR_lasx_xvfmadd_s CODE_FOR_fmav8sf4
+#define CODE_FOR_lasx_xvfmadd_d CODE_FOR_fmav4df4
+#define CODE_FOR_lasx_xvfmsub_s CODE_FOR_fmsv8sf4
+#define CODE_FOR_lasx_xvfmsub_d CODE_FOR_fmsv4df4
+#define CODE_FOR_lasx_xvfnmadd_s CODE_FOR_xvfnmaddv8sf4_nmadd4
+#define CODE_FOR_lasx_xvfnmadd_d CODE_FOR_xvfnmaddv4df4_nmadd4
+#define CODE_FOR_lasx_xvfnmsub_s CODE_FOR_xvfnmsubv8sf4_nmsub4
+#define CODE_FOR_lasx_xvfnmsub_d CODE_FOR_xvfnmsubv4df4_nmsub4
+
+#define CODE_FOR_lasx_xvpermi_q CODE_FOR_lasx_xvpermi_q_v32qi
+#define CODE_FOR_lasx_xvpermi_d CODE_FOR_lasx_xvpermi_d_v4di
+#define CODE_FOR_lasx_xbnz_v CODE_FOR_lasx_xbnz_v_b
+#define CODE_FOR_lasx_xbz_v CODE_FOR_lasx_xbz_v_b
+
+#define CODE_FOR_lasx_xvssub_b CODE_FOR_lasx_xvssub_s_b
+#define CODE_FOR_lasx_xvssub_h CODE_FOR_lasx_xvssub_s_h
+#define CODE_FOR_lasx_xvssub_w CODE_FOR_lasx_xvssub_s_w
+#define CODE_FOR_lasx_xvssub_d CODE_FOR_lasx_xvssub_s_d
+#define CODE_FOR_lasx_xvssub_bu CODE_FOR_lasx_xvssub_u_bu
+#define CODE_FOR_lasx_xvssub_hu CODE_FOR_lasx_xvssub_u_hu
+#define CODE_FOR_lasx_xvssub_wu CODE_FOR_lasx_xvssub_u_wu
+#define CODE_FOR_lasx_xvssub_du CODE_FOR_lasx_xvssub_u_du
+#define CODE_FOR_lasx_xvabsd_b CODE_FOR_lasx_xvabsd_s_b
+#define CODE_FOR_lasx_xvabsd_h CODE_FOR_lasx_xvabsd_s_h
+#define CODE_FOR_lasx_xvabsd_w CODE_FOR_lasx_xvabsd_s_w
+#define CODE_FOR_lasx_xvabsd_d CODE_FOR_lasx_xvabsd_s_d
+#define CODE_FOR_lasx_xvabsd_bu CODE_FOR_lasx_xvabsd_u_bu
+#define CODE_FOR_lasx_xvabsd_hu CODE_FOR_lasx_xvabsd_u_hu
+#define CODE_FOR_lasx_xvabsd_wu CODE_FOR_lasx_xvabsd_u_wu
+#define CODE_FOR_lasx_xvabsd_du CODE_FOR_lasx_xvabsd_u_du
+#define CODE_FOR_lasx_xvavg_b CODE_FOR_lasx_xvavg_s_b
+#define CODE_FOR_lasx_xvavg_h CODE_FOR_lasx_xvavg_s_h
+#define CODE_FOR_lasx_xvavg_w CODE_FOR_lasx_xvavg_s_w
+#define CODE_FOR_lasx_xvavg_d CODE_FOR_lasx_xvavg_s_d
+#define CODE_FOR_lasx_xvavg_bu CODE_FOR_lasx_xvavg_u_bu
+#define CODE_FOR_lasx_xvavg_hu CODE_FOR_lasx_xvavg_u_hu
+#define CODE_FOR_lasx_xvavg_wu CODE_FOR_lasx_xvavg_u_wu
+#define CODE_FOR_lasx_xvavg_du CODE_FOR_lasx_xvavg_u_du
+#define CODE_FOR_lasx_xvavgr_b CODE_FOR_lasx_xvavgr_s_b
+#define CODE_FOR_lasx_xvavgr_h CODE_FOR_lasx_xvavgr_s_h
+#define CODE_FOR_lasx_xvavgr_w CODE_FOR_lasx_xvavgr_s_w
+#define CODE_FOR_lasx_xvavgr_d CODE_FOR_lasx_xvavgr_s_d
+#define CODE_FOR_lasx_xvavgr_bu CODE_FOR_lasx_xvavgr_u_bu
+#define CODE_FOR_lasx_xvavgr_hu CODE_FOR_lasx_xvavgr_u_hu
+#define CODE_FOR_lasx_xvavgr_wu CODE_FOR_lasx_xvavgr_u_wu
+#define CODE_FOR_lasx_xvavgr_du CODE_FOR_lasx_xvavgr_u_du
+#define CODE_FOR_lasx_xvmuh_b CODE_FOR_lasx_xvmuh_s_b
+#define CODE_FOR_lasx_xvmuh_h CODE_FOR_lasx_xvmuh_s_h
+#define CODE_FOR_lasx_xvmuh_w CODE_FOR_lasx_xvmuh_s_w
+#define CODE_FOR_lasx_xvmuh_d CODE_FOR_lasx_xvmuh_s_d
+#define CODE_FOR_lasx_xvmuh_bu CODE_FOR_lasx_xvmuh_u_bu
+#define CODE_FOR_lasx_xvmuh_hu CODE_FOR_lasx_xvmuh_u_hu
+#define CODE_FOR_lasx_xvmuh_wu CODE_FOR_lasx_xvmuh_u_wu
+#define CODE_FOR_lasx_xvmuh_du CODE_FOR_lasx_xvmuh_u_du
+#define CODE_FOR_lasx_xvssran_b_h CODE_FOR_lasx_xvssran_s_b_h
+#define CODE_FOR_lasx_xvssran_h_w CODE_FOR_lasx_xvssran_s_h_w
+#define CODE_FOR_lasx_xvssran_w_d CODE_FOR_lasx_xvssran_s_w_d
+#define CODE_FOR_lasx_xvssran_bu_h CODE_FOR_lasx_xvssran_u_bu_h
+#define CODE_FOR_lasx_xvssran_hu_w CODE_FOR_lasx_xvssran_u_hu_w
+#define CODE_FOR_lasx_xvssran_wu_d CODE_FOR_lasx_xvssran_u_wu_d
+#define CODE_FOR_lasx_xvssrarn_b_h CODE_FOR_lasx_xvssrarn_s_b_h
+#define CODE_FOR_lasx_xvssrarn_h_w CODE_FOR_lasx_xvssrarn_s_h_w
+#define CODE_FOR_lasx_xvssrarn_w_d CODE_FOR_lasx_xvssrarn_s_w_d
+#define CODE_FOR_lasx_xvssrarn_bu_h CODE_FOR_lasx_xvssrarn_u_bu_h
+#define CODE_FOR_lasx_xvssrarn_hu_w CODE_FOR_lasx_xvssrarn_u_hu_w
+#define CODE_FOR_lasx_xvssrarn_wu_d CODE_FOR_lasx_xvssrarn_u_wu_d
+#define CODE_FOR_lasx_xvssrln_bu_h CODE_FOR_lasx_xvssrln_u_bu_h
+#define CODE_FOR_lasx_xvssrln_hu_w CODE_FOR_lasx_xvssrln_u_hu_w
+#define CODE_FOR_lasx_xvssrln_wu_d CODE_FOR_lasx_xvssrln_u_wu_d
+#define CODE_FOR_lasx_xvssrlrn_bu_h CODE_FOR_lasx_xvssrlrn_u_bu_h
+#define CODE_FOR_lasx_xvssrlrn_hu_w CODE_FOR_lasx_xvssrlrn_u_hu_w
+#define CODE_FOR_lasx_xvssrlrn_wu_d CODE_FOR_lasx_xvssrlrn_u_wu_d
+#define CODE_FOR_lasx_xvftint_w_s CODE_FOR_lasx_xvftint_s_w_s
+#define CODE_FOR_lasx_xvftint_l_d CODE_FOR_lasx_xvftint_s_l_d
+#define CODE_FOR_lasx_xvftint_wu_s CODE_FOR_lasx_xvftint_u_wu_s
+#define CODE_FOR_lasx_xvftint_lu_d CODE_FOR_lasx_xvftint_u_lu_d
+#define CODE_FOR_lasx_xvsllwil_h_b CODE_FOR_lasx_xvsllwil_s_h_b
+#define CODE_FOR_lasx_xvsllwil_w_h CODE_FOR_lasx_xvsllwil_s_w_h
+#define CODE_FOR_lasx_xvsllwil_d_w CODE_FOR_lasx_xvsllwil_s_d_w
+#define CODE_FOR_lasx_xvsllwil_hu_bu CODE_FOR_lasx_xvsllwil_u_hu_bu
+#define CODE_FOR_lasx_xvsllwil_wu_hu CODE_FOR_lasx_xvsllwil_u_wu_hu
+#define CODE_FOR_lasx_xvsllwil_du_wu CODE_FOR_lasx_xvsllwil_u_du_wu
+#define CODE_FOR_lasx_xvsat_b CODE_FOR_lasx_xvsat_s_b
+#define CODE_FOR_lasx_xvsat_h CODE_FOR_lasx_xvsat_s_h
+#define CODE_FOR_lasx_xvsat_w CODE_FOR_lasx_xvsat_s_w
+#define CODE_FOR_lasx_xvsat_d CODE_FOR_lasx_xvsat_s_d
+#define CODE_FOR_lasx_xvsat_bu CODE_FOR_lasx_xvsat_u_bu
+#define CODE_FOR_lasx_xvsat_hu CODE_FOR_lasx_xvsat_u_hu
+#define CODE_FOR_lasx_xvsat_wu CODE_FOR_lasx_xvsat_u_wu
+#define CODE_FOR_lasx_xvsat_du CODE_FOR_lasx_xvsat_u_du
+
 static const struct loongarch_builtin_description loongarch_builtins[] = {
 #define LARCH_MOVFCSR2GR 0
   DIRECT_BUILTIN (movfcsr2gr, LARCH_USI_FTYPE_UQI, hard_float),
@@ -1209,7 +1511,761 @@ static const struct loongarch_builtin_description loongarch_builtins[] = {
   LSX_BUILTIN (vshuf_b, LARCH_V16QI_FTYPE_V16QI_V16QI_V16QI),
   LSX_BUILTIN (vldx, LARCH_V16QI_FTYPE_CVPOINTER_DI),
   LSX_NO_TARGET_BUILTIN (vstx, LARCH_VOID_FTYPE_V16QI_CVPOINTER_DI),
-  LSX_BUILTIN (vextl_qu_du, LARCH_UV2DI_FTYPE_UV2DI)
+  LSX_BUILTIN (vextl_qu_du, LARCH_UV2DI_FTYPE_UV2DI),
+
+  /* Built-in functions for LASX.  */
+  LASX_BUILTIN (xvsll_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvsll_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsll_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsll_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvslli_b, LARCH_V32QI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvslli_h, LARCH_V16HI_FTYPE_V16HI_UQI),
+  LASX_BUILTIN (xvslli_w, LARCH_V8SI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvslli_d, LARCH_V4DI_FTYPE_V4DI_UQI),
+  LASX_BUILTIN (xvsra_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvsra_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsra_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsra_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvsrai_b, LARCH_V32QI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvsrai_h, LARCH_V16HI_FTYPE_V16HI_UQI),
+  LASX_BUILTIN (xvsrai_w, LARCH_V8SI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvsrai_d, LARCH_V4DI_FTYPE_V4DI_UQI),
+  LASX_BUILTIN (xvsrar_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvsrar_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsrar_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsrar_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvsrari_b, LARCH_V32QI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvsrari_h, LARCH_V16HI_FTYPE_V16HI_UQI),
+  LASX_BUILTIN (xvsrari_w, LARCH_V8SI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvsrari_d, LARCH_V4DI_FTYPE_V4DI_UQI),
+  LASX_BUILTIN (xvsrl_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvsrl_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsrl_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsrl_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvsrli_b, LARCH_V32QI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvsrli_h, LARCH_V16HI_FTYPE_V16HI_UQI),
+  LASX_BUILTIN (xvsrli_w, LARCH_V8SI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvsrli_d, LARCH_V4DI_FTYPE_V4DI_UQI),
+  LASX_BUILTIN (xvsrlr_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvsrlr_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsrlr_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsrlr_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvsrlri_b, LARCH_V32QI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvsrlri_h, LARCH_V16HI_FTYPE_V16HI_UQI),
+  LASX_BUILTIN (xvsrlri_w, LARCH_V8SI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvsrlri_d, LARCH_V4DI_FTYPE_V4DI_UQI),
+  LASX_BUILTIN (xvbitclr_b, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvbitclr_h, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvbitclr_w, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvbitclr_d, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvbitclri_b, LARCH_UV32QI_FTYPE_UV32QI_UQI),
+  LASX_BUILTIN (xvbitclri_h, LARCH_UV16HI_FTYPE_UV16HI_UQI),
+  LASX_BUILTIN (xvbitclri_w, LARCH_UV8SI_FTYPE_UV8SI_UQI),
+  LASX_BUILTIN (xvbitclri_d, LARCH_UV4DI_FTYPE_UV4DI_UQI),
+  LASX_BUILTIN (xvbitset_b, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvbitset_h, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvbitset_w, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvbitset_d, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvbitseti_b, LARCH_UV32QI_FTYPE_UV32QI_UQI),
+  LASX_BUILTIN (xvbitseti_h, LARCH_UV16HI_FTYPE_UV16HI_UQI),
+  LASX_BUILTIN (xvbitseti_w, LARCH_UV8SI_FTYPE_UV8SI_UQI),
+  LASX_BUILTIN (xvbitseti_d, LARCH_UV4DI_FTYPE_UV4DI_UQI),
+  LASX_BUILTIN (xvbitrev_b, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvbitrev_h, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvbitrev_w, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvbitrev_d, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvbitrevi_b, LARCH_UV32QI_FTYPE_UV32QI_UQI),
+  LASX_BUILTIN (xvbitrevi_h, LARCH_UV16HI_FTYPE_UV16HI_UQI),
+  LASX_BUILTIN (xvbitrevi_w, LARCH_UV8SI_FTYPE_UV8SI_UQI),
+  LASX_BUILTIN (xvbitrevi_d, LARCH_UV4DI_FTYPE_UV4DI_UQI),
+  LASX_BUILTIN (xvadd_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvadd_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvadd_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvadd_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvaddi_bu, LARCH_V32QI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvaddi_hu, LARCH_V16HI_FTYPE_V16HI_UQI),
+  LASX_BUILTIN (xvaddi_wu, LARCH_V8SI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvaddi_du, LARCH_V4DI_FTYPE_V4DI_UQI),
+  LASX_BUILTIN (xvsub_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvsub_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsub_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsub_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvsubi_bu, LARCH_V32QI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvsubi_hu, LARCH_V16HI_FTYPE_V16HI_UQI),
+  LASX_BUILTIN (xvsubi_wu, LARCH_V8SI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvsubi_du, LARCH_V4DI_FTYPE_V4DI_UQI),
+  LASX_BUILTIN (xvmax_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvmax_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvmax_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvmax_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvmaxi_b, LARCH_V32QI_FTYPE_V32QI_QI),
+  LASX_BUILTIN (xvmaxi_h, LARCH_V16HI_FTYPE_V16HI_QI),
+  LASX_BUILTIN (xvmaxi_w, LARCH_V8SI_FTYPE_V8SI_QI),
+  LASX_BUILTIN (xvmaxi_d, LARCH_V4DI_FTYPE_V4DI_QI),
+  LASX_BUILTIN (xvmax_bu, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvmax_hu, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvmax_wu, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvmax_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvmaxi_bu, LARCH_UV32QI_FTYPE_UV32QI_UQI),
+  LASX_BUILTIN (xvmaxi_hu, LARCH_UV16HI_FTYPE_UV16HI_UQI),
+  LASX_BUILTIN (xvmaxi_wu, LARCH_UV8SI_FTYPE_UV8SI_UQI),
+  LASX_BUILTIN (xvmaxi_du, LARCH_UV4DI_FTYPE_UV4DI_UQI),
+  LASX_BUILTIN (xvmin_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvmin_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvmin_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvmin_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvmini_b, LARCH_V32QI_FTYPE_V32QI_QI),
+  LASX_BUILTIN (xvmini_h, LARCH_V16HI_FTYPE_V16HI_QI),
+  LASX_BUILTIN (xvmini_w, LARCH_V8SI_FTYPE_V8SI_QI),
+  LASX_BUILTIN (xvmini_d, LARCH_V4DI_FTYPE_V4DI_QI),
+  LASX_BUILTIN (xvmin_bu, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvmin_hu, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvmin_wu, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvmin_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvmini_bu, LARCH_UV32QI_FTYPE_UV32QI_UQI),
+  LASX_BUILTIN (xvmini_hu, LARCH_UV16HI_FTYPE_UV16HI_UQI),
+  LASX_BUILTIN (xvmini_wu, LARCH_UV8SI_FTYPE_UV8SI_UQI),
+  LASX_BUILTIN (xvmini_du, LARCH_UV4DI_FTYPE_UV4DI_UQI),
+  LASX_BUILTIN (xvseq_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvseq_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvseq_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvseq_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvseqi_b, LARCH_V32QI_FTYPE_V32QI_QI),
+  LASX_BUILTIN (xvseqi_h, LARCH_V16HI_FTYPE_V16HI_QI),
+  LASX_BUILTIN (xvseqi_w, LARCH_V8SI_FTYPE_V8SI_QI),
+  LASX_BUILTIN (xvseqi_d, LARCH_V4DI_FTYPE_V4DI_QI),
+  LASX_BUILTIN (xvslt_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvslt_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvslt_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvslt_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvslti_b, LARCH_V32QI_FTYPE_V32QI_QI),
+  LASX_BUILTIN (xvslti_h, LARCH_V16HI_FTYPE_V16HI_QI),
+  LASX_BUILTIN (xvslti_w, LARCH_V8SI_FTYPE_V8SI_QI),
+  LASX_BUILTIN (xvslti_d, LARCH_V4DI_FTYPE_V4DI_QI),
+  LASX_BUILTIN (xvslt_bu, LARCH_V32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvslt_hu, LARCH_V16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvslt_wu, LARCH_V8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvslt_du, LARCH_V4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvslti_bu, LARCH_V32QI_FTYPE_UV32QI_UQI),
+  LASX_BUILTIN (xvslti_hu, LARCH_V16HI_FTYPE_UV16HI_UQI),
+  LASX_BUILTIN (xvslti_wu, LARCH_V8SI_FTYPE_UV8SI_UQI),
+  LASX_BUILTIN (xvslti_du, LARCH_V4DI_FTYPE_UV4DI_UQI),
+  LASX_BUILTIN (xvsle_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvsle_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsle_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsle_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvslei_b, LARCH_V32QI_FTYPE_V32QI_QI),
+  LASX_BUILTIN (xvslei_h, LARCH_V16HI_FTYPE_V16HI_QI),
+  LASX_BUILTIN (xvslei_w, LARCH_V8SI_FTYPE_V8SI_QI),
+  LASX_BUILTIN (xvslei_d, LARCH_V4DI_FTYPE_V4DI_QI),
+  LASX_BUILTIN (xvsle_bu, LARCH_V32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvsle_hu, LARCH_V16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvsle_wu, LARCH_V8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvsle_du, LARCH_V4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvslei_bu, LARCH_V32QI_FTYPE_UV32QI_UQI),
+  LASX_BUILTIN (xvslei_hu, LARCH_V16HI_FTYPE_UV16HI_UQI),
+  LASX_BUILTIN (xvslei_wu, LARCH_V8SI_FTYPE_UV8SI_UQI),
+  LASX_BUILTIN (xvslei_du, LARCH_V4DI_FTYPE_UV4DI_UQI),
+
+  LASX_BUILTIN (xvsat_b, LARCH_V32QI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvsat_h, LARCH_V16HI_FTYPE_V16HI_UQI),
+  LASX_BUILTIN (xvsat_w, LARCH_V8SI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvsat_d, LARCH_V4DI_FTYPE_V4DI_UQI),
+  LASX_BUILTIN (xvsat_bu, LARCH_UV32QI_FTYPE_UV32QI_UQI),
+  LASX_BUILTIN (xvsat_hu, LARCH_UV16HI_FTYPE_UV16HI_UQI),
+  LASX_BUILTIN (xvsat_wu, LARCH_UV8SI_FTYPE_UV8SI_UQI),
+  LASX_BUILTIN (xvsat_du, LARCH_UV4DI_FTYPE_UV4DI_UQI),
+
+  LASX_BUILTIN (xvadda_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvadda_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvadda_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvadda_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvsadd_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvsadd_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsadd_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsadd_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvsadd_bu, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvsadd_hu, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvsadd_wu, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvsadd_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+
+  LASX_BUILTIN (xvavg_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvavg_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvavg_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvavg_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvavg_bu, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvavg_hu, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvavg_wu, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvavg_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+
+  LASX_BUILTIN (xvavgr_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvavgr_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvavgr_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvavgr_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvavgr_bu, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvavgr_hu, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvavgr_wu, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvavgr_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+
+  LASX_BUILTIN (xvssub_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvssub_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvssub_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvssub_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvssub_bu, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvssub_hu, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvssub_wu, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvssub_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvabsd_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvabsd_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvabsd_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvabsd_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvabsd_bu, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvabsd_hu, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvabsd_wu, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvabsd_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+
+  LASX_BUILTIN (xvmul_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvmul_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvmul_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvmul_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvmadd_b, LARCH_V32QI_FTYPE_V32QI_V32QI_V32QI),
+  LASX_BUILTIN (xvmadd_h, LARCH_V16HI_FTYPE_V16HI_V16HI_V16HI),
+  LASX_BUILTIN (xvmadd_w, LARCH_V8SI_FTYPE_V8SI_V8SI_V8SI),
+  LASX_BUILTIN (xvmadd_d, LARCH_V4DI_FTYPE_V4DI_V4DI_V4DI),
+  LASX_BUILTIN (xvmsub_b, LARCH_V32QI_FTYPE_V32QI_V32QI_V32QI),
+  LASX_BUILTIN (xvmsub_h, LARCH_V16HI_FTYPE_V16HI_V16HI_V16HI),
+  LASX_BUILTIN (xvmsub_w, LARCH_V8SI_FTYPE_V8SI_V8SI_V8SI),
+  LASX_BUILTIN (xvmsub_d, LARCH_V4DI_FTYPE_V4DI_V4DI_V4DI),
+  LASX_BUILTIN (xvdiv_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvdiv_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvdiv_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvdiv_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvdiv_bu, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvdiv_hu, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvdiv_wu, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvdiv_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvhaddw_h_b, LARCH_V16HI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvhaddw_w_h, LARCH_V8SI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvhaddw_d_w, LARCH_V4DI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvhaddw_hu_bu, LARCH_UV16HI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvhaddw_wu_hu, LARCH_UV8SI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvhaddw_du_wu, LARCH_UV4DI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvhsubw_h_b, LARCH_V16HI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvhsubw_w_h, LARCH_V8SI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvhsubw_d_w, LARCH_V4DI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvhsubw_hu_bu, LARCH_V16HI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvhsubw_wu_hu, LARCH_V8SI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvhsubw_du_wu, LARCH_V4DI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvmod_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvmod_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvmod_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvmod_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvmod_bu, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvmod_hu, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvmod_wu, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvmod_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+
+  LASX_BUILTIN (xvrepl128vei_b, LARCH_V32QI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvrepl128vei_h, LARCH_V16HI_FTYPE_V16HI_UQI),
+  LASX_BUILTIN (xvrepl128vei_w, LARCH_V8SI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvrepl128vei_d, LARCH_V4DI_FTYPE_V4DI_UQI),
+  LASX_BUILTIN (xvpickev_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvpickev_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvpickev_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvpickev_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvpickod_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvpickod_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvpickod_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvpickod_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvilvh_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvilvh_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvilvh_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvilvh_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvilvl_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvilvl_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvilvl_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvilvl_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvpackev_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvpackev_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvpackev_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvpackev_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvpackod_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvpackod_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvpackod_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvpackod_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvshuf_b, LARCH_V32QI_FTYPE_V32QI_V32QI_V32QI),
+  LASX_BUILTIN (xvshuf_h, LARCH_V16HI_FTYPE_V16HI_V16HI_V16HI),
+  LASX_BUILTIN (xvshuf_w, LARCH_V8SI_FTYPE_V8SI_V8SI_V8SI),
+  LASX_BUILTIN (xvshuf_d, LARCH_V4DI_FTYPE_V4DI_V4DI_V4DI),
+  LASX_BUILTIN (xvand_v, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvandi_b, LARCH_UV32QI_FTYPE_UV32QI_UQI),
+  LASX_BUILTIN (xvor_v, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvori_b, LARCH_UV32QI_FTYPE_UV32QI_UQI),
+  LASX_BUILTIN (xvnor_v, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvnori_b, LARCH_UV32QI_FTYPE_UV32QI_UQI),
+  LASX_BUILTIN (xvxor_v, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvxori_b, LARCH_UV32QI_FTYPE_UV32QI_UQI),
+  LASX_BUILTIN (xvbitsel_v, LARCH_UV32QI_FTYPE_UV32QI_UV32QI_UV32QI),
+  LASX_BUILTIN (xvbitseli_b, LARCH_UV32QI_FTYPE_UV32QI_UV32QI_USI),
+
+  LASX_BUILTIN (xvshuf4i_b, LARCH_V32QI_FTYPE_V32QI_USI),
+  LASX_BUILTIN (xvshuf4i_h, LARCH_V16HI_FTYPE_V16HI_USI),
+  LASX_BUILTIN (xvshuf4i_w, LARCH_V8SI_FTYPE_V8SI_USI),
+
+  LASX_BUILTIN (xvreplgr2vr_b, LARCH_V32QI_FTYPE_SI),
+  LASX_BUILTIN (xvreplgr2vr_h, LARCH_V16HI_FTYPE_SI),
+  LASX_BUILTIN (xvreplgr2vr_w, LARCH_V8SI_FTYPE_SI),
+  LASX_BUILTIN (xvreplgr2vr_d, LARCH_V4DI_FTYPE_DI),
+  LASX_BUILTIN (xvpcnt_b, LARCH_V32QI_FTYPE_V32QI),
+  LASX_BUILTIN (xvpcnt_h, LARCH_V16HI_FTYPE_V16HI),
+  LASX_BUILTIN (xvpcnt_w, LARCH_V8SI_FTYPE_V8SI),
+  LASX_BUILTIN (xvpcnt_d, LARCH_V4DI_FTYPE_V4DI),
+  LASX_BUILTIN (xvclo_b, LARCH_V32QI_FTYPE_V32QI),
+  LASX_BUILTIN (xvclo_h, LARCH_V16HI_FTYPE_V16HI),
+  LASX_BUILTIN (xvclo_w, LARCH_V8SI_FTYPE_V8SI),
+  LASX_BUILTIN (xvclo_d, LARCH_V4DI_FTYPE_V4DI),
+  LASX_BUILTIN (xvclz_b, LARCH_V32QI_FTYPE_V32QI),
+  LASX_BUILTIN (xvclz_h, LARCH_V16HI_FTYPE_V16HI),
+  LASX_BUILTIN (xvclz_w, LARCH_V8SI_FTYPE_V8SI),
+  LASX_BUILTIN (xvclz_d, LARCH_V4DI_FTYPE_V4DI),
+
+  LASX_BUILTIN (xvrepli_b, LARCH_V32QI_FTYPE_HI),
+  LASX_BUILTIN (xvrepli_h, LARCH_V16HI_FTYPE_HI),
+  LASX_BUILTIN (xvrepli_w, LARCH_V8SI_FTYPE_HI),
+  LASX_BUILTIN (xvrepli_d, LARCH_V4DI_FTYPE_HI),
+  LASX_BUILTIN (xvfcmp_caf_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_caf_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_cor_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_cor_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_cun_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_cun_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_cune_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_cune_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_cueq_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_cueq_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_ceq_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_ceq_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_cne_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_cne_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_clt_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_clt_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_cult_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_cult_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_cle_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_cle_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_cule_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_cule_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_saf_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_saf_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_sor_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_sor_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_sun_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_sun_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_sune_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_sune_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_sueq_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_sueq_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_seq_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_seq_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_sne_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_sne_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_slt_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_slt_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_sult_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_sult_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_sle_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_sle_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcmp_sule_s, LARCH_V8SI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcmp_sule_d, LARCH_V4DI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfadd_s, LARCH_V8SF_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfadd_d, LARCH_V4DF_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfsub_s, LARCH_V8SF_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfsub_d, LARCH_V4DF_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfmul_s, LARCH_V8SF_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfmul_d, LARCH_V4DF_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfdiv_s, LARCH_V8SF_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfdiv_d, LARCH_V4DF_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfcvt_h_s, LARCH_V16HI_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfcvt_s_d, LARCH_V8SF_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfmin_s, LARCH_V8SF_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfmin_d, LARCH_V4DF_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfmina_s, LARCH_V8SF_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfmina_d, LARCH_V4DF_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfmax_s, LARCH_V8SF_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfmax_d, LARCH_V4DF_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfmaxa_s, LARCH_V8SF_FTYPE_V8SF_V8SF),
+  LASX_BUILTIN (xvfmaxa_d, LARCH_V4DF_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvfclass_s, LARCH_V8SI_FTYPE_V8SF),
+  LASX_BUILTIN (xvfclass_d, LARCH_V4DI_FTYPE_V4DF),
+  LASX_BUILTIN (xvfsqrt_s, LARCH_V8SF_FTYPE_V8SF),
+  LASX_BUILTIN (xvfsqrt_d, LARCH_V4DF_FTYPE_V4DF),
+  LASX_BUILTIN (xvfrecip_s, LARCH_V8SF_FTYPE_V8SF),
+  LASX_BUILTIN (xvfrecip_d, LARCH_V4DF_FTYPE_V4DF),
+  LASX_BUILTIN (xvfrint_s, LARCH_V8SF_FTYPE_V8SF),
+  LASX_BUILTIN (xvfrint_d, LARCH_V4DF_FTYPE_V4DF),
+  LASX_BUILTIN (xvfrsqrt_s, LARCH_V8SF_FTYPE_V8SF),
+  LASX_BUILTIN (xvfrsqrt_d, LARCH_V4DF_FTYPE_V4DF),
+  LASX_BUILTIN (xvflogb_s, LARCH_V8SF_FTYPE_V8SF),
+  LASX_BUILTIN (xvflogb_d, LARCH_V4DF_FTYPE_V4DF),
+  LASX_BUILTIN (xvfcvth_s_h, LARCH_V8SF_FTYPE_V16HI),
+  LASX_BUILTIN (xvfcvth_d_s, LARCH_V4DF_FTYPE_V8SF),
+  LASX_BUILTIN (xvfcvtl_s_h, LARCH_V8SF_FTYPE_V16HI),
+  LASX_BUILTIN (xvfcvtl_d_s, LARCH_V4DF_FTYPE_V8SF),
+  LASX_BUILTIN (xvftint_w_s, LARCH_V8SI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftint_l_d, LARCH_V4DI_FTYPE_V4DF),
+  LASX_BUILTIN (xvftint_wu_s, LARCH_UV8SI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftint_lu_d, LARCH_UV4DI_FTYPE_V4DF),
+  LASX_BUILTIN (xvftintrz_w_s, LARCH_V8SI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintrz_l_d, LARCH_V4DI_FTYPE_V4DF),
+  LASX_BUILTIN (xvftintrz_wu_s, LARCH_UV8SI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintrz_lu_d, LARCH_UV4DI_FTYPE_V4DF),
+  LASX_BUILTIN (xvffint_s_w, LARCH_V8SF_FTYPE_V8SI),
+  LASX_BUILTIN (xvffint_d_l, LARCH_V4DF_FTYPE_V4DI),
+  LASX_BUILTIN (xvffint_s_wu, LARCH_V8SF_FTYPE_UV8SI),
+  LASX_BUILTIN (xvffint_d_lu, LARCH_V4DF_FTYPE_UV4DI),
+
+  LASX_BUILTIN (xvreplve_b, LARCH_V32QI_FTYPE_V32QI_SI),
+  LASX_BUILTIN (xvreplve_h, LARCH_V16HI_FTYPE_V16HI_SI),
+  LASX_BUILTIN (xvreplve_w, LARCH_V8SI_FTYPE_V8SI_SI),
+  LASX_BUILTIN (xvreplve_d, LARCH_V4DI_FTYPE_V4DI_SI),
+  LASX_BUILTIN (xvpermi_w, LARCH_V8SI_FTYPE_V8SI_V8SI_USI),
+
+  LASX_BUILTIN (xvandn_v, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvneg_b, LARCH_V32QI_FTYPE_V32QI),
+  LASX_BUILTIN (xvneg_h, LARCH_V16HI_FTYPE_V16HI),
+  LASX_BUILTIN (xvneg_w, LARCH_V8SI_FTYPE_V8SI),
+  LASX_BUILTIN (xvneg_d, LARCH_V4DI_FTYPE_V4DI),
+  LASX_BUILTIN (xvmuh_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvmuh_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvmuh_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvmuh_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvmuh_bu, LARCH_UV32QI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvmuh_hu, LARCH_UV16HI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvmuh_wu, LARCH_UV8SI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvmuh_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvsllwil_h_b, LARCH_V16HI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvsllwil_w_h, LARCH_V8SI_FTYPE_V16HI_UQI),
+  LASX_BUILTIN (xvsllwil_d_w, LARCH_V4DI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvsllwil_hu_bu, LARCH_UV16HI_FTYPE_UV32QI_UQI), /* FIXME: U? */
+  LASX_BUILTIN (xvsllwil_wu_hu, LARCH_UV8SI_FTYPE_UV16HI_UQI),
+  LASX_BUILTIN (xvsllwil_du_wu, LARCH_UV4DI_FTYPE_UV8SI_UQI),
+  LASX_BUILTIN (xvsran_b_h, LARCH_V32QI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsran_h_w, LARCH_V16HI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsran_w_d, LARCH_V8SI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvssran_b_h, LARCH_V32QI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvssran_h_w, LARCH_V16HI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvssran_w_d, LARCH_V8SI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvssran_bu_h, LARCH_UV32QI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvssran_hu_w, LARCH_UV16HI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvssran_wu_d, LARCH_UV8SI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvsrarn_b_h, LARCH_V32QI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsrarn_h_w, LARCH_V16HI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsrarn_w_d, LARCH_V8SI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvssrarn_b_h, LARCH_V32QI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvssrarn_h_w, LARCH_V16HI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvssrarn_w_d, LARCH_V8SI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvssrarn_bu_h, LARCH_UV32QI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvssrarn_hu_w, LARCH_UV16HI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvssrarn_wu_d, LARCH_UV8SI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvsrln_b_h, LARCH_V32QI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsrln_h_w, LARCH_V16HI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsrln_w_d, LARCH_V8SI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvssrln_bu_h, LARCH_UV32QI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvssrln_hu_w, LARCH_UV16HI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvssrln_wu_d, LARCH_UV8SI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvsrlrn_b_h, LARCH_V32QI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsrlrn_h_w, LARCH_V16HI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsrlrn_w_d, LARCH_V8SI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvssrlrn_bu_h, LARCH_UV32QI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvssrlrn_hu_w, LARCH_UV16HI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvssrlrn_wu_d, LARCH_UV8SI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvfrstpi_b, LARCH_V32QI_FTYPE_V32QI_V32QI_UQI),
+  LASX_BUILTIN (xvfrstpi_h, LARCH_V16HI_FTYPE_V16HI_V16HI_UQI),
+  LASX_BUILTIN (xvfrstp_b, LARCH_V32QI_FTYPE_V32QI_V32QI_V32QI),
+  LASX_BUILTIN (xvfrstp_h, LARCH_V16HI_FTYPE_V16HI_V16HI_V16HI),
+  LASX_BUILTIN (xvshuf4i_d, LARCH_V4DI_FTYPE_V4DI_V4DI_USI),
+  LASX_BUILTIN (xvbsrl_v, LARCH_V32QI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvbsll_v, LARCH_V32QI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvextrins_b, LARCH_V32QI_FTYPE_V32QI_V32QI_USI),
+  LASX_BUILTIN (xvextrins_h, LARCH_V16HI_FTYPE_V16HI_V16HI_USI),
+  LASX_BUILTIN (xvextrins_w, LARCH_V8SI_FTYPE_V8SI_V8SI_USI),
+  LASX_BUILTIN (xvextrins_d, LARCH_V4DI_FTYPE_V4DI_V4DI_USI),
+  LASX_BUILTIN (xvmskltz_b, LARCH_V32QI_FTYPE_V32QI),
+  LASX_BUILTIN (xvmskltz_h, LARCH_V16HI_FTYPE_V16HI),
+  LASX_BUILTIN (xvmskltz_w, LARCH_V8SI_FTYPE_V8SI),
+  LASX_BUILTIN (xvmskltz_d, LARCH_V4DI_FTYPE_V4DI),
+  LASX_BUILTIN (xvsigncov_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvsigncov_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsigncov_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsigncov_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvfmadd_s, LARCH_V8SF_FTYPE_V8SF_V8SF_V8SF),
+  LASX_BUILTIN (xvfmadd_d, LARCH_V4DF_FTYPE_V4DF_V4DF_V4DF),
+  LASX_BUILTIN (xvfmsub_s, LARCH_V8SF_FTYPE_V8SF_V8SF_V8SF),
+  LASX_BUILTIN (xvfmsub_d, LARCH_V4DF_FTYPE_V4DF_V4DF_V4DF),
+  LASX_BUILTIN (xvfnmadd_s, LARCH_V8SF_FTYPE_V8SF_V8SF_V8SF),
+  LASX_BUILTIN (xvfnmadd_d, LARCH_V4DF_FTYPE_V4DF_V4DF_V4DF),
+  LASX_BUILTIN (xvfnmsub_s, LARCH_V8SF_FTYPE_V8SF_V8SF_V8SF),
+  LASX_BUILTIN (xvfnmsub_d, LARCH_V4DF_FTYPE_V4DF_V4DF_V4DF),
+  LASX_BUILTIN (xvftintrne_w_s, LARCH_V8SI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintrne_l_d, LARCH_V4DI_FTYPE_V4DF),
+  LASX_BUILTIN (xvftintrp_w_s, LARCH_V8SI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintrp_l_d, LARCH_V4DI_FTYPE_V4DF),
+  LASX_BUILTIN (xvftintrm_w_s, LARCH_V8SI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintrm_l_d, LARCH_V4DI_FTYPE_V4DF),
+  LASX_BUILTIN (xvftint_w_d, LARCH_V8SI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvffint_s_l, LARCH_V8SF_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvftintrz_w_d, LARCH_V8SI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvftintrp_w_d, LARCH_V8SI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvftintrm_w_d, LARCH_V8SI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvftintrne_w_d, LARCH_V8SI_FTYPE_V4DF_V4DF),
+  LASX_BUILTIN (xvftinth_l_s, LARCH_V4DI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintl_l_s, LARCH_V4DI_FTYPE_V8SF),
+  LASX_BUILTIN (xvffinth_d_w, LARCH_V4DF_FTYPE_V8SI),
+  LASX_BUILTIN (xvffintl_d_w, LARCH_V4DF_FTYPE_V8SI),
+  LASX_BUILTIN (xvftintrzh_l_s, LARCH_V4DI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintrzl_l_s, LARCH_V4DI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintrph_l_s, LARCH_V4DI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintrpl_l_s, LARCH_V4DI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintrmh_l_s, LARCH_V4DI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintrml_l_s, LARCH_V4DI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintrneh_l_s, LARCH_V4DI_FTYPE_V8SF),
+  LASX_BUILTIN (xvftintrnel_l_s, LARCH_V4DI_FTYPE_V8SF),
+  LASX_BUILTIN (xvfrintrne_s, LARCH_V8SF_FTYPE_V8SF),
+  LASX_BUILTIN (xvfrintrne_d, LARCH_V4DF_FTYPE_V4DF),
+  LASX_BUILTIN (xvfrintrz_s, LARCH_V8SF_FTYPE_V8SF),
+  LASX_BUILTIN (xvfrintrz_d, LARCH_V4DF_FTYPE_V4DF),
+  LASX_BUILTIN (xvfrintrp_s, LARCH_V8SF_FTYPE_V8SF),
+  LASX_BUILTIN (xvfrintrp_d, LARCH_V4DF_FTYPE_V4DF),
+  LASX_BUILTIN (xvfrintrm_s, LARCH_V8SF_FTYPE_V8SF),
+  LASX_BUILTIN (xvfrintrm_d, LARCH_V4DF_FTYPE_V4DF),
+  LASX_BUILTIN (xvld, LARCH_V32QI_FTYPE_CVPOINTER_SI),
+  LASX_NO_TARGET_BUILTIN (xvst, LARCH_VOID_FTYPE_V32QI_CVPOINTER_SI),
+  LASX_NO_TARGET_BUILTIN (xvstelm_b, LARCH_VOID_FTYPE_V32QI_CVPOINTER_SI_UQI),
+  LASX_NO_TARGET_BUILTIN (xvstelm_h, LARCH_VOID_FTYPE_V16HI_CVPOINTER_SI_UQI),
+  LASX_NO_TARGET_BUILTIN (xvstelm_w, LARCH_VOID_FTYPE_V8SI_CVPOINTER_SI_UQI),
+  LASX_NO_TARGET_BUILTIN (xvstelm_d, LARCH_VOID_FTYPE_V4DI_CVPOINTER_SI_UQI),
+  LASX_BUILTIN (xvinsve0_w, LARCH_V8SI_FTYPE_V8SI_V8SI_UQI),
+  LASX_BUILTIN (xvinsve0_d, LARCH_V4DI_FTYPE_V4DI_V4DI_UQI),
+  LASX_BUILTIN (xvpickve_w, LARCH_V8SI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvpickve_d, LARCH_V4DI_FTYPE_V4DI_UQI),
+  LASX_BUILTIN (xvpickve_w_f, LARCH_V8SF_FTYPE_V8SF_UQI),
+  LASX_BUILTIN (xvpickve_d_f, LARCH_V4DF_FTYPE_V4DF_UQI),
+  LASX_BUILTIN (xvssrlrn_b_h, LARCH_V32QI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvssrlrn_h_w, LARCH_V16HI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvssrlrn_w_d, LARCH_V8SI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvssrln_b_h, LARCH_V32QI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvssrln_h_w, LARCH_V16HI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvssrln_w_d, LARCH_V8SI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvorn_v, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvldi, LARCH_V4DI_FTYPE_HI),
+  LASX_BUILTIN (xvldx, LARCH_V32QI_FTYPE_CVPOINTER_DI),
+  LASX_NO_TARGET_BUILTIN (xvstx, LARCH_VOID_FTYPE_V32QI_CVPOINTER_DI),
+  LASX_BUILTIN (xvextl_qu_du, LARCH_UV4DI_FTYPE_UV4DI),
+
+  /* LASX.  */
+  LASX_BUILTIN (xvinsgr2vr_w, LARCH_V8SI_FTYPE_V8SI_SI_UQI),
+  LASX_BUILTIN (xvinsgr2vr_d, LARCH_V4DI_FTYPE_V4DI_DI_UQI),
+
+  LASX_BUILTIN (xvreplve0_b, LARCH_V32QI_FTYPE_V32QI),
+  LASX_BUILTIN (xvreplve0_h, LARCH_V16HI_FTYPE_V16HI),
+  LASX_BUILTIN (xvreplve0_w, LARCH_V8SI_FTYPE_V8SI),
+  LASX_BUILTIN (xvreplve0_d, LARCH_V4DI_FTYPE_V4DI),
+  LASX_BUILTIN (xvreplve0_q, LARCH_V32QI_FTYPE_V32QI),
+  LASX_BUILTIN (vext2xv_h_b, LARCH_V16HI_FTYPE_V32QI),
+  LASX_BUILTIN (vext2xv_w_h, LARCH_V8SI_FTYPE_V16HI),
+  LASX_BUILTIN (vext2xv_d_w, LARCH_V4DI_FTYPE_V8SI),
+  LASX_BUILTIN (vext2xv_w_b, LARCH_V8SI_FTYPE_V32QI),
+  LASX_BUILTIN (vext2xv_d_h, LARCH_V4DI_FTYPE_V16HI),
+  LASX_BUILTIN (vext2xv_d_b, LARCH_V4DI_FTYPE_V32QI),
+  LASX_BUILTIN (vext2xv_hu_bu, LARCH_V16HI_FTYPE_V32QI),
+  LASX_BUILTIN (vext2xv_wu_hu, LARCH_V8SI_FTYPE_V16HI),
+  LASX_BUILTIN (vext2xv_du_wu, LARCH_V4DI_FTYPE_V8SI),
+  LASX_BUILTIN (vext2xv_wu_bu, LARCH_V8SI_FTYPE_V32QI),
+  LASX_BUILTIN (vext2xv_du_hu, LARCH_V4DI_FTYPE_V16HI),
+  LASX_BUILTIN (vext2xv_du_bu, LARCH_V4DI_FTYPE_V32QI),
+  LASX_BUILTIN (xvpermi_q, LARCH_V32QI_FTYPE_V32QI_V32QI_USI),
+  LASX_BUILTIN (xvpermi_d, LARCH_V4DI_FTYPE_V4DI_USI),
+  LASX_BUILTIN (xvperm_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN_TEST_BRANCH (xbz_b, LARCH_SI_FTYPE_UV32QI),
+  LASX_BUILTIN_TEST_BRANCH (xbz_h, LARCH_SI_FTYPE_UV16HI),
+  LASX_BUILTIN_TEST_BRANCH (xbz_w, LARCH_SI_FTYPE_UV8SI),
+  LASX_BUILTIN_TEST_BRANCH (xbz_d, LARCH_SI_FTYPE_UV4DI),
+  LASX_BUILTIN_TEST_BRANCH (xbnz_b, LARCH_SI_FTYPE_UV32QI),
+  LASX_BUILTIN_TEST_BRANCH (xbnz_h, LARCH_SI_FTYPE_UV16HI),
+  LASX_BUILTIN_TEST_BRANCH (xbnz_w, LARCH_SI_FTYPE_UV8SI),
+  LASX_BUILTIN_TEST_BRANCH (xbnz_d, LARCH_SI_FTYPE_UV4DI),
+  LASX_BUILTIN_TEST_BRANCH (xbz_v, LARCH_SI_FTYPE_UV32QI),
+  LASX_BUILTIN_TEST_BRANCH (xbnz_v, LARCH_SI_FTYPE_UV32QI),
+  LASX_BUILTIN (xvldrepl_b, LARCH_V32QI_FTYPE_CVPOINTER_SI),
+  LASX_BUILTIN (xvldrepl_h, LARCH_V16HI_FTYPE_CVPOINTER_SI),
+  LASX_BUILTIN (xvldrepl_w, LARCH_V8SI_FTYPE_CVPOINTER_SI),
+  LASX_BUILTIN (xvldrepl_d, LARCH_V4DI_FTYPE_CVPOINTER_SI),
+  LASX_BUILTIN (xvpickve2gr_w, LARCH_SI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvpickve2gr_wu, LARCH_USI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvpickve2gr_d, LARCH_DI_FTYPE_V4DI_UQI),
+  LASX_BUILTIN (xvpickve2gr_du, LARCH_UDI_FTYPE_V4DI_UQI),
+
+  LASX_BUILTIN (xvaddwev_q_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvaddwev_d_w, LARCH_V4DI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvaddwev_w_h, LARCH_V8SI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvaddwev_h_b, LARCH_V16HI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvaddwev_q_du, LARCH_V4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvaddwev_d_wu, LARCH_V4DI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvaddwev_w_hu, LARCH_V8SI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvaddwev_h_bu, LARCH_V16HI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvsubwev_q_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvsubwev_d_w, LARCH_V4DI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsubwev_w_h, LARCH_V8SI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsubwev_h_b, LARCH_V16HI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvsubwev_q_du, LARCH_V4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvsubwev_d_wu, LARCH_V4DI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvsubwev_w_hu, LARCH_V8SI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvsubwev_h_bu, LARCH_V16HI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvmulwev_q_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvmulwev_d_w, LARCH_V4DI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvmulwev_w_h, LARCH_V8SI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvmulwev_h_b, LARCH_V16HI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvmulwev_q_du, LARCH_V4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvmulwev_d_wu, LARCH_V4DI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvmulwev_w_hu, LARCH_V8SI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvmulwev_h_bu, LARCH_V16HI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvaddwod_q_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvaddwod_d_w, LARCH_V4DI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvaddwod_w_h, LARCH_V8SI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvaddwod_h_b, LARCH_V16HI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvaddwod_q_du, LARCH_V4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvaddwod_d_wu, LARCH_V4DI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvaddwod_w_hu, LARCH_V8SI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvaddwod_h_bu, LARCH_V16HI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvsubwod_q_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvsubwod_d_w, LARCH_V4DI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvsubwod_w_h, LARCH_V8SI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvsubwod_h_b, LARCH_V16HI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvsubwod_q_du, LARCH_V4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvsubwod_d_wu, LARCH_V4DI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvsubwod_w_hu, LARCH_V8SI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvsubwod_h_bu, LARCH_V16HI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvmulwod_q_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvmulwod_d_w, LARCH_V4DI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvmulwod_w_h, LARCH_V8SI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvmulwod_h_b, LARCH_V16HI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvmulwod_q_du, LARCH_V4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvmulwod_d_wu, LARCH_V4DI_FTYPE_UV8SI_UV8SI),
+  LASX_BUILTIN (xvmulwod_w_hu, LARCH_V8SI_FTYPE_UV16HI_UV16HI),
+  LASX_BUILTIN (xvmulwod_h_bu, LARCH_V16HI_FTYPE_UV32QI_UV32QI),
+  LASX_BUILTIN (xvaddwev_d_wu_w, LARCH_V4DI_FTYPE_UV8SI_V8SI),
+  LASX_BUILTIN (xvaddwev_w_hu_h, LARCH_V8SI_FTYPE_UV16HI_V16HI),
+  LASX_BUILTIN (xvaddwev_h_bu_b, LARCH_V16HI_FTYPE_UV32QI_V32QI),
+  LASX_BUILTIN (xvmulwev_d_wu_w, LARCH_V4DI_FTYPE_UV8SI_V8SI),
+  LASX_BUILTIN (xvmulwev_w_hu_h, LARCH_V8SI_FTYPE_UV16HI_V16HI),
+  LASX_BUILTIN (xvmulwev_h_bu_b, LARCH_V16HI_FTYPE_UV32QI_V32QI),
+  LASX_BUILTIN (xvaddwod_d_wu_w, LARCH_V4DI_FTYPE_UV8SI_V8SI),
+  LASX_BUILTIN (xvaddwod_w_hu_h, LARCH_V8SI_FTYPE_UV16HI_V16HI),
+  LASX_BUILTIN (xvaddwod_h_bu_b, LARCH_V16HI_FTYPE_UV32QI_V32QI),
+  LASX_BUILTIN (xvmulwod_d_wu_w, LARCH_V4DI_FTYPE_UV8SI_V8SI),
+  LASX_BUILTIN (xvmulwod_w_hu_h, LARCH_V8SI_FTYPE_UV16HI_V16HI),
+  LASX_BUILTIN (xvmulwod_h_bu_b, LARCH_V16HI_FTYPE_UV32QI_V32QI),
+  LASX_BUILTIN (xvhaddw_q_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvhaddw_qu_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvhsubw_q_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvhsubw_qu_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI),
+  LASX_BUILTIN (xvmaddwev_q_d, LARCH_V4DI_FTYPE_V4DI_V4DI_V4DI),
+  LASX_BUILTIN (xvmaddwev_d_w, LARCH_V4DI_FTYPE_V4DI_V8SI_V8SI),
+  LASX_BUILTIN (xvmaddwev_w_h, LARCH_V8SI_FTYPE_V8SI_V16HI_V16HI),
+  LASX_BUILTIN (xvmaddwev_h_b, LARCH_V16HI_FTYPE_V16HI_V32QI_V32QI),
+  LASX_BUILTIN (xvmaddwev_q_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI_UV4DI),
+  LASX_BUILTIN (xvmaddwev_d_wu, LARCH_UV4DI_FTYPE_UV4DI_UV8SI_UV8SI),
+  LASX_BUILTIN (xvmaddwev_w_hu, LARCH_UV8SI_FTYPE_UV8SI_UV16HI_UV16HI),
+  LASX_BUILTIN (xvmaddwev_h_bu, LARCH_UV16HI_FTYPE_UV16HI_UV32QI_UV32QI),
+  LASX_BUILTIN (xvmaddwod_q_d, LARCH_V4DI_FTYPE_V4DI_V4DI_V4DI),
+  LASX_BUILTIN (xvmaddwod_d_w, LARCH_V4DI_FTYPE_V4DI_V8SI_V8SI),
+  LASX_BUILTIN (xvmaddwod_w_h, LARCH_V8SI_FTYPE_V8SI_V16HI_V16HI),
+  LASX_BUILTIN (xvmaddwod_h_b, LARCH_V16HI_FTYPE_V16HI_V32QI_V32QI),
+  LASX_BUILTIN (xvmaddwod_q_du, LARCH_UV4DI_FTYPE_UV4DI_UV4DI_UV4DI),
+  LASX_BUILTIN (xvmaddwod_d_wu, LARCH_UV4DI_FTYPE_UV4DI_UV8SI_UV8SI),
+  LASX_BUILTIN (xvmaddwod_w_hu, LARCH_UV8SI_FTYPE_UV8SI_UV16HI_UV16HI),
+  LASX_BUILTIN (xvmaddwod_h_bu, LARCH_UV16HI_FTYPE_UV16HI_UV32QI_UV32QI),
+  LASX_BUILTIN (xvmaddwev_q_du_d, LARCH_V4DI_FTYPE_V4DI_UV4DI_V4DI),
+  LASX_BUILTIN (xvmaddwev_d_wu_w, LARCH_V4DI_FTYPE_V4DI_UV8SI_V8SI),
+  LASX_BUILTIN (xvmaddwev_w_hu_h, LARCH_V8SI_FTYPE_V8SI_UV16HI_V16HI),
+  LASX_BUILTIN (xvmaddwev_h_bu_b, LARCH_V16HI_FTYPE_V16HI_UV32QI_V32QI),
+  LASX_BUILTIN (xvmaddwod_q_du_d, LARCH_V4DI_FTYPE_V4DI_UV4DI_V4DI),
+  LASX_BUILTIN (xvmaddwod_d_wu_w, LARCH_V4DI_FTYPE_V4DI_UV8SI_V8SI),
+  LASX_BUILTIN (xvmaddwod_w_hu_h, LARCH_V8SI_FTYPE_V8SI_UV16HI_V16HI),
+  LASX_BUILTIN (xvmaddwod_h_bu_b, LARCH_V16HI_FTYPE_V16HI_UV32QI_V32QI),
+  LASX_BUILTIN (xvrotr_b, LARCH_V32QI_FTYPE_V32QI_V32QI),
+  LASX_BUILTIN (xvrotr_h, LARCH_V16HI_FTYPE_V16HI_V16HI),
+  LASX_BUILTIN (xvrotr_w, LARCH_V8SI_FTYPE_V8SI_V8SI),
+  LASX_BUILTIN (xvrotr_d, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvadd_q, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvsub_q, LARCH_V4DI_FTYPE_V4DI_V4DI),
+  LASX_BUILTIN (xvaddwev_q_du_d, LARCH_V4DI_FTYPE_UV4DI_V4DI),
+  LASX_BUILTIN (xvaddwod_q_du_d, LARCH_V4DI_FTYPE_UV4DI_V4DI),
+  LASX_BUILTIN (xvmulwev_q_du_d, LARCH_V4DI_FTYPE_UV4DI_V4DI),
+  LASX_BUILTIN (xvmulwod_q_du_d, LARCH_V4DI_FTYPE_UV4DI_V4DI),
+  LASX_BUILTIN (xvmskgez_b, LARCH_V32QI_FTYPE_V32QI),
+  LASX_BUILTIN (xvmsknz_b, LARCH_V32QI_FTYPE_V32QI),
+  LASX_BUILTIN (xvexth_h_b, LARCH_V16HI_FTYPE_V32QI),
+  LASX_BUILTIN (xvexth_w_h, LARCH_V8SI_FTYPE_V16HI),
+  LASX_BUILTIN (xvexth_d_w, LARCH_V4DI_FTYPE_V8SI),
+  LASX_BUILTIN (xvexth_q_d, LARCH_V4DI_FTYPE_V4DI),
+  LASX_BUILTIN (xvexth_hu_bu, LARCH_UV16HI_FTYPE_UV32QI),
+  LASX_BUILTIN (xvexth_wu_hu, LARCH_UV8SI_FTYPE_UV16HI),
+  LASX_BUILTIN (xvexth_du_wu, LARCH_UV4DI_FTYPE_UV8SI),
+  LASX_BUILTIN (xvexth_qu_du, LARCH_UV4DI_FTYPE_UV4DI),
+  LASX_BUILTIN (xvrotri_b, LARCH_V32QI_FTYPE_V32QI_UQI),
+  LASX_BUILTIN (xvrotri_h, LARCH_V16HI_FTYPE_V16HI_UQI),
+  LASX_BUILTIN (xvrotri_w, LARCH_V8SI_FTYPE_V8SI_UQI),
+  LASX_BUILTIN (xvrotri_d, LARCH_V4DI_FTYPE_V4DI_UQI),
+  LASX_BUILTIN (xvextl_q_d, LARCH_V4DI_FTYPE_V4DI),
+  LASX_BUILTIN (xvsrlni_b_h, LARCH_V32QI_FTYPE_V32QI_V32QI_USI),
+  LASX_BUILTIN (xvsrlni_h_w, LARCH_V16HI_FTYPE_V16HI_V16HI_USI),
+  LASX_BUILTIN (xvsrlni_w_d, LARCH_V8SI_FTYPE_V8SI_V8SI_USI),
+  LASX_BUILTIN (xvsrlni_d_q, LARCH_V4DI_FTYPE_V4DI_V4DI_USI),
+  LASX_BUILTIN (xvsrlrni_b_h, LARCH_V32QI_FTYPE_V32QI_V32QI_USI),
+  LASX_BUILTIN (xvsrlrni_h_w, LARCH_V16HI_FTYPE_V16HI_V16HI_USI),
+  LASX_BUILTIN (xvsrlrni_w_d, LARCH_V8SI_FTYPE_V8SI_V8SI_USI),
+  LASX_BUILTIN (xvsrlrni_d_q, LARCH_V4DI_FTYPE_V4DI_V4DI_USI),
+  LASX_BUILTIN (xvssrlni_b_h, LARCH_V32QI_FTYPE_V32QI_V32QI_USI),
+  LASX_BUILTIN (xvssrlni_h_w, LARCH_V16HI_FTYPE_V16HI_V16HI_USI),
+  LASX_BUILTIN (xvssrlni_w_d, LARCH_V8SI_FTYPE_V8SI_V8SI_USI),
+  LASX_BUILTIN (xvssrlni_d_q, LARCH_V4DI_FTYPE_V4DI_V4DI_USI),
+  LASX_BUILTIN (xvssrlni_bu_h, LARCH_UV32QI_FTYPE_UV32QI_V32QI_USI),
+  LASX_BUILTIN (xvssrlni_hu_w, LARCH_UV16HI_FTYPE_UV16HI_V16HI_USI),
+  LASX_BUILTIN (xvssrlni_wu_d, LARCH_UV8SI_FTYPE_UV8SI_V8SI_USI),
+  LASX_BUILTIN (xvssrlni_du_q, LARCH_UV4DI_FTYPE_UV4DI_V4DI_USI),
+  LASX_BUILTIN (xvssrlrni_b_h, LARCH_V32QI_FTYPE_V32QI_V32QI_USI),
+  LASX_BUILTIN (xvssrlrni_h_w, LARCH_V16HI_FTYPE_V16HI_V16HI_USI),
+  LASX_BUILTIN (xvssrlrni_w_d, LARCH_V8SI_FTYPE_V8SI_V8SI_USI),
+  LASX_BUILTIN (xvssrlrni_d_q, LARCH_V4DI_FTYPE_V4DI_V4DI_USI),
+  LASX_BUILTIN (xvssrlrni_bu_h, LARCH_UV32QI_FTYPE_UV32QI_V32QI_USI),
+  LASX_BUILTIN (xvssrlrni_hu_w, LARCH_UV16HI_FTYPE_UV16HI_V16HI_USI),
+  LASX_BUILTIN (xvssrlrni_wu_d, LARCH_UV8SI_FTYPE_UV8SI_V8SI_USI),
+  LASX_BUILTIN (xvssrlrni_du_q, LARCH_UV4DI_FTYPE_UV4DI_V4DI_USI),
+  LASX_BUILTIN (xvsrani_b_h, LARCH_V32QI_FTYPE_V32QI_V32QI_USI),
+  LASX_BUILTIN (xvsrani_h_w, LARCH_V16HI_FTYPE_V16HI_V16HI_USI),
+  LASX_BUILTIN (xvsrani_w_d, LARCH_V8SI_FTYPE_V8SI_V8SI_USI),
+  LASX_BUILTIN (xvsrani_d_q, LARCH_V4DI_FTYPE_V4DI_V4DI_USI),
+  LASX_BUILTIN (xvsrarni_b_h, LARCH_V32QI_FTYPE_V32QI_V32QI_USI),
+  LASX_BUILTIN (xvsrarni_h_w, LARCH_V16HI_FTYPE_V16HI_V16HI_USI),
+  LASX_BUILTIN (xvsrarni_w_d, LARCH_V8SI_FTYPE_V8SI_V8SI_USI),
+  LASX_BUILTIN (xvsrarni_d_q, LARCH_V4DI_FTYPE_V4DI_V4DI_USI),
+  LASX_BUILTIN (xvssrani_b_h, LARCH_V32QI_FTYPE_V32QI_V32QI_USI),
+  LASX_BUILTIN (xvssrani_h_w, LARCH_V16HI_FTYPE_V16HI_V16HI_USI),
+  LASX_BUILTIN (xvssrani_w_d, LARCH_V8SI_FTYPE_V8SI_V8SI_USI),
+  LASX_BUILTIN (xvssrani_d_q, LARCH_V4DI_FTYPE_V4DI_V4DI_USI),
+  LASX_BUILTIN (xvssrani_bu_h, LARCH_UV32QI_FTYPE_UV32QI_V32QI_USI),
+  LASX_BUILTIN (xvssrani_hu_w, LARCH_UV16HI_FTYPE_UV16HI_V16HI_USI),
+  LASX_BUILTIN (xvssrani_wu_d, LARCH_UV8SI_FTYPE_UV8SI_V8SI_USI),
+  LASX_BUILTIN (xvssrani_du_q, LARCH_UV4DI_FTYPE_UV4DI_V4DI_USI),
+  LASX_BUILTIN (xvssrarni_b_h, LARCH_V32QI_FTYPE_V32QI_V32QI_USI),
+  LASX_BUILTIN (xvssrarni_h_w, LARCH_V16HI_FTYPE_V16HI_V16HI_USI),
+  LASX_BUILTIN (xvssrarni_w_d, LARCH_V8SI_FTYPE_V8SI_V8SI_USI),
+  LASX_BUILTIN (xvssrarni_d_q, LARCH_V4DI_FTYPE_V4DI_V4DI_USI),
+  LASX_BUILTIN (xvssrarni_bu_h, LARCH_UV32QI_FTYPE_UV32QI_V32QI_USI),
+  LASX_BUILTIN (xvssrarni_hu_w, LARCH_UV16HI_FTYPE_UV16HI_V16HI_USI),
+  LASX_BUILTIN (xvssrarni_wu_d, LARCH_UV8SI_FTYPE_UV8SI_V8SI_USI),
+  LASX_BUILTIN (xvssrarni_du_q, LARCH_UV4DI_FTYPE_UV4DI_V4DI_USI)
 };
 
 /* Index I is the function declaration for loongarch_builtins[I], or null if
@@ -1441,11 +2497,15 @@ loongarch_builtin_vectorized_function (unsigned int fn, tree type_out,
     {
       if (out_n == 2 && in_n == 2)
 	return LARCH_GET_BUILTIN (lsx_vfrintrp_d);
+      if (out_n == 4 && in_n == 4)
+	return LARCH_GET_BUILTIN (lasx_xvfrintrp_d);
     }
       if (out_mode == SFmode && in_mode == SFmode)
     {
       if (out_n == 4 && in_n == 4)
 	return LARCH_GET_BUILTIN (lsx_vfrintrp_s);
+      if (out_n == 8 && in_n == 8)
+	return LARCH_GET_BUILTIN (lasx_xvfrintrp_s);
     }
       break;
 
@@ -1454,11 +2514,15 @@ loongarch_builtin_vectorized_function (unsigned int fn, tree type_out,
     {
       if (out_n == 2 && in_n == 2)
 	return LARCH_GET_BUILTIN (lsx_vfrintrz_d);
+      if (out_n == 4 && in_n == 4)
+	return LARCH_GET_BUILTIN (lasx_xvfrintrz_d);
     }
       if (out_mode == SFmode && in_mode == SFmode)
     {
       if (out_n == 4 && in_n == 4)
 	return LARCH_GET_BUILTIN (lsx_vfrintrz_s);
+      if (out_n == 8 && in_n == 8)
+	return LARCH_GET_BUILTIN (lasx_xvfrintrz_s);
     }
       break;
 
@@ -1468,11 +2532,15 @@ loongarch_builtin_vectorized_function (unsigned int fn, tree type_out,
     {
       if (out_n == 2 && in_n == 2)
 	return LARCH_GET_BUILTIN (lsx_vfrint_d);
+      if (out_n == 4 && in_n == 4)
+	return LARCH_GET_BUILTIN (lasx_xvfrint_d);
     }
       if (out_mode == SFmode && in_mode == SFmode)
     {
       if (out_n == 4 && in_n == 4)
 	return LARCH_GET_BUILTIN (lsx_vfrint_s);
+      if (out_n == 8 && in_n == 8)
+	return LARCH_GET_BUILTIN (lasx_xvfrint_s);
     }
       break;
 
@@ -1481,11 +2549,15 @@ loongarch_builtin_vectorized_function (unsigned int fn, tree type_out,
     {
       if (out_n == 2 && in_n == 2)
 	return LARCH_GET_BUILTIN (lsx_vfrintrm_d);
+      if (out_n == 4 && in_n == 4)
+	return LARCH_GET_BUILTIN (lasx_xvfrintrm_d);
     }
       if (out_mode == SFmode && in_mode == SFmode)
     {
       if (out_n == 4 && in_n == 4)
 	return LARCH_GET_BUILTIN (lsx_vfrintrm_s);
+      if (out_n == 8 && in_n == 8)
+	return LARCH_GET_BUILTIN (lasx_xvfrintrm_s);
     }
       break;
 
@@ -1560,6 +2632,30 @@ loongarch_expand_builtin_insn (enum insn_code icode, unsigned int nops,
     case CODE_FOR_lsx_vsubi_hu:
     case CODE_FOR_lsx_vsubi_wu:
     case CODE_FOR_lsx_vsubi_du:
+    case CODE_FOR_lasx_xvaddi_bu:
+    case CODE_FOR_lasx_xvaddi_hu:
+    case CODE_FOR_lasx_xvaddi_wu:
+    case CODE_FOR_lasx_xvaddi_du:
+    case CODE_FOR_lasx_xvslti_bu:
+    case CODE_FOR_lasx_xvslti_hu:
+    case CODE_FOR_lasx_xvslti_wu:
+    case CODE_FOR_lasx_xvslti_du:
+    case CODE_FOR_lasx_xvslei_bu:
+    case CODE_FOR_lasx_xvslei_hu:
+    case CODE_FOR_lasx_xvslei_wu:
+    case CODE_FOR_lasx_xvslei_du:
+    case CODE_FOR_lasx_xvmaxi_bu:
+    case CODE_FOR_lasx_xvmaxi_hu:
+    case CODE_FOR_lasx_xvmaxi_wu:
+    case CODE_FOR_lasx_xvmaxi_du:
+    case CODE_FOR_lasx_xvmini_bu:
+    case CODE_FOR_lasx_xvmini_hu:
+    case CODE_FOR_lasx_xvmini_wu:
+    case CODE_FOR_lasx_xvmini_du:
+    case CODE_FOR_lasx_xvsubi_bu:
+    case CODE_FOR_lasx_xvsubi_hu:
+    case CODE_FOR_lasx_xvsubi_wu:
+    case CODE_FOR_lasx_xvsubi_du:
       gcc_assert (has_target_p && nops == 3);
       /* We only generate a vector of constants iff the second argument
 	 is an immediate.  We also validate the range of the immediate.  */
@@ -1598,6 +2694,26 @@ loongarch_expand_builtin_insn (enum insn_code icode, unsigned int nops,
     case CODE_FOR_lsx_vmini_h:
     case CODE_FOR_lsx_vmini_w:
     case CODE_FOR_lsx_vmini_d:
+    case CODE_FOR_lasx_xvseqi_b:
+    case CODE_FOR_lasx_xvseqi_h:
+    case CODE_FOR_lasx_xvseqi_w:
+    case CODE_FOR_lasx_xvseqi_d:
+    case CODE_FOR_lasx_xvslti_b:
+    case CODE_FOR_lasx_xvslti_h:
+    case CODE_FOR_lasx_xvslti_w:
+    case CODE_FOR_lasx_xvslti_d:
+    case CODE_FOR_lasx_xvslei_b:
+    case CODE_FOR_lasx_xvslei_h:
+    case CODE_FOR_lasx_xvslei_w:
+    case CODE_FOR_lasx_xvslei_d:
+    case CODE_FOR_lasx_xvmaxi_b:
+    case CODE_FOR_lasx_xvmaxi_h:
+    case CODE_FOR_lasx_xvmaxi_w:
+    case CODE_FOR_lasx_xvmaxi_d:
+    case CODE_FOR_lasx_xvmini_b:
+    case CODE_FOR_lasx_xvmini_h:
+    case CODE_FOR_lasx_xvmini_w:
+    case CODE_FOR_lasx_xvmini_d:
       gcc_assert (has_target_p && nops == 3);
       /* We only generate a vector of constants iff the second argument
 	 is an immediate.  We also validate the range of the immediate.  */
@@ -1620,6 +2736,10 @@ loongarch_expand_builtin_insn (enum insn_code icode, unsigned int nops,
     case CODE_FOR_lsx_vori_b:
     case CODE_FOR_lsx_vnori_b:
     case CODE_FOR_lsx_vxori_b:
+    case CODE_FOR_lasx_xvandi_b:
+    case CODE_FOR_lasx_xvori_b:
+    case CODE_FOR_lasx_xvnori_b:
+    case CODE_FOR_lasx_xvxori_b:
       gcc_assert (has_target_p && nops == 3);
       if (!CONST_INT_P (ops[2].value))
 	break;
@@ -1629,6 +2749,7 @@ loongarch_expand_builtin_insn (enum insn_code icode, unsigned int nops,
       break;
 
     case CODE_FOR_lsx_vbitseli_b:
+    case CODE_FOR_lasx_xvbitseli_b:
       gcc_assert (has_target_p && nops == 4);
       if (!CONST_INT_P (ops[3].value))
 	break;
@@ -1641,6 +2762,10 @@ loongarch_expand_builtin_insn (enum insn_code icode, unsigned int nops,
     case CODE_FOR_lsx_vreplgr2vr_h:
     case CODE_FOR_lsx_vreplgr2vr_w:
     case CODE_FOR_lsx_vreplgr2vr_d:
+    case CODE_FOR_lasx_xvreplgr2vr_b:
+    case CODE_FOR_lasx_xvreplgr2vr_h:
+    case CODE_FOR_lasx_xvreplgr2vr_w:
+    case CODE_FOR_lasx_xvreplgr2vr_d:
       /* Map the built-ins to vector fill operations.  We need fix up the mode
 	 for the element being inserted.  */
       gcc_assert (has_target_p && nops == 2);
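A hedged sketch of such a vector fill (intrinsic name assumed from lasxintrin.h): xvreplgr2vr.* broadcasts one general-purpose register into every element, so the expander only needs to narrow the scalar operand to the element mode.

  #include <lasxintrin.h>

  __m256i
  splat_word (int x)
  {
    /* All eight 32-bit lanes are set to x.  */
    return __lasx_xvreplgr2vr_w (x);
  }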
@@ -1669,6 +2794,26 @@ loongarch_expand_builtin_insn (enum insn_code icode, unsigned int nops,
     case CODE_FOR_lsx_vpickod_b:
     case CODE_FOR_lsx_vpickod_h:
     case CODE_FOR_lsx_vpickod_w:
+    case CODE_FOR_lasx_xvilvh_b:
+    case CODE_FOR_lasx_xvilvh_h:
+    case CODE_FOR_lasx_xvilvh_w:
+    case CODE_FOR_lasx_xvilvh_d:
+    case CODE_FOR_lasx_xvilvl_b:
+    case CODE_FOR_lasx_xvilvl_h:
+    case CODE_FOR_lasx_xvilvl_w:
+    case CODE_FOR_lasx_xvilvl_d:
+    case CODE_FOR_lasx_xvpackev_b:
+    case CODE_FOR_lasx_xvpackev_h:
+    case CODE_FOR_lasx_xvpackev_w:
+    case CODE_FOR_lasx_xvpackod_b:
+    case CODE_FOR_lasx_xvpackod_h:
+    case CODE_FOR_lasx_xvpackod_w:
+    case CODE_FOR_lasx_xvpickev_b:
+    case CODE_FOR_lasx_xvpickev_h:
+    case CODE_FOR_lasx_xvpickev_w:
+    case CODE_FOR_lasx_xvpickod_b:
+    case CODE_FOR_lasx_xvpickod_h:
+    case CODE_FOR_lasx_xvpickod_w:
       /* Swap the operands 1 and 2 for interleave operations.  Built-ins follow
 	 convention of ISA, which have op1 as higher component and op2 as lower
 	 component.  However, the VEC_PERM op in tree and vec_concat in RTL
@@ -1690,6 +2835,18 @@ loongarch_expand_builtin_insn (enum insn_code icode, unsigned int nops,
     case CODE_FOR_lsx_vsrli_h:
     case CODE_FOR_lsx_vsrli_w:
     case CODE_FOR_lsx_vsrli_d:
+    case CODE_FOR_lasx_xvslli_b:
+    case CODE_FOR_lasx_xvslli_h:
+    case CODE_FOR_lasx_xvslli_w:
+    case CODE_FOR_lasx_xvslli_d:
+    case CODE_FOR_lasx_xvsrai_b:
+    case CODE_FOR_lasx_xvsrai_h:
+    case CODE_FOR_lasx_xvsrai_w:
+    case CODE_FOR_lasx_xvsrai_d:
+    case CODE_FOR_lasx_xvsrli_b:
+    case CODE_FOR_lasx_xvsrli_h:
+    case CODE_FOR_lasx_xvsrli_w:
+    case CODE_FOR_lasx_xvsrli_d:
       gcc_assert (has_target_p && nops == 3);
       if (CONST_INT_P (ops[2].value))
 	{
@@ -1750,6 +2907,25 @@ loongarch_expand_builtin_insn (enum insn_code icode, unsigned int nops,
 							     INTVAL (ops[2].value));
       break;
 
+    case CODE_FOR_lasx_xvinsgr2vr_w:
+    case CODE_FOR_lasx_xvinsgr2vr_d:
+      /* Map the built-ins to insert operations.  We need to swap operands,
+	 fix up the mode for the element being inserted, and generate
+	 a bit mask for vec_merge.  */
+      gcc_assert (has_target_p && nops == 4);
+      std::swap (ops[1], ops[2]);
+      imode = GET_MODE_INNER (ops[0].mode);
+      ops[1].value = lowpart_subreg (imode, ops[1].value, ops[1].mode);
+      ops[1].mode = imode;
+      rangelo = 0;
+      rangehi = GET_MODE_NUNITS (ops[0].mode) - 1;
+      if (CONST_INT_P (ops[3].value)
+	  && IN_RANGE (INTVAL (ops[3].value), rangelo, rangehi))
+	ops[3].value = GEN_INT (1 << INTVAL (ops[3].value));
+      else
+	error_opno = 2;
+      break;
+
     default:
       break;
   }
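A hedged example of the xvinsgr2vr mapping added above (names assumed from lasxintrin.h): the user-supplied lane index is an immediate that the expander, after swapping the vector and scalar operands, turns into the vec_merge mask 1 << index.

  #include <lasxintrin.h>

  __m256i
  set_lane_3 (__m256i v, int x)
  {
    /* The index must be a constant in [0, 7] for the .w form; it is
       rewritten internally into the mask 1 << 3.  */
    return __lasx_xvinsgr2vr_w (v, x, 3);
  }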
@@ -1859,12 +3035,14 @@ loongarch_expand_builtin (tree exp, rtx target, rtx subtarget ATTRIBUTE_UNUSED,
     {
     case LARCH_BUILTIN_DIRECT:
     case LARCH_BUILTIN_LSX:
+    case LARCH_BUILTIN_LASX:
       return loongarch_expand_builtin_direct (d->icode, target, exp, true);
 
     case LARCH_BUILTIN_DIRECT_NO_TARGET:
       return loongarch_expand_builtin_direct (d->icode, target, exp, false);
 
     case LARCH_BUILTIN_LSX_TEST_BRANCH:
+    case LARCH_BUILTIN_LASX_TEST_BRANCH:
       return loongarch_expand_builtin_lsx_test_branch (d->icode, exp);
     }
   gcc_unreachable ();
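A hedged example of the newly routed LARCH_BUILTIN_LASX_TEST_BRANCH class (spelling assumed from lasxintrin.h); these builtins reduce a whole 256-bit register to a scalar 0/1 predicate through loongarch_expand_builtin_lsx_test_branch:

  #include <lasxintrin.h>

  int
  any_bit_set (__m256i v)
  {
    /* Returns 1 if any bit of v is set, 0 otherwise.  */
    return __lasx_xbnz_v (v);
  }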
diff --git a/gcc/config/loongarch/loongarch-ftypes.def b/gcc/config/loongarch/loongarch-ftypes.def
index 1ce9d83ccab..72d96878038 100644
--- a/gcc/config/loongarch/loongarch-ftypes.def
+++ b/gcc/config/loongarch/loongarch-ftypes.def
@@ -67,6 +67,7 @@ DEF_LARCH_FTYPE (3, (UDI, UDI, UDI, USI))
 DEF_LARCH_FTYPE (1, (DF, DF))
 DEF_LARCH_FTYPE (2, (DF, DF, DF))
 DEF_LARCH_FTYPE (1, (DF, V2DF))
+DEF_LARCH_FTYPE (1, (DF, V4DF))
 
 DEF_LARCH_FTYPE (1, (DI, DI))
 DEF_LARCH_FTYPE (1, (DI, SI))
@@ -83,6 +84,7 @@ DEF_LARCH_FTYPE (2, (DI, SI, SI))
 DEF_LARCH_FTYPE (2, (DI, USI, USI))
 
 DEF_LARCH_FTYPE (2, (DI, V2DI, UQI))
+DEF_LARCH_FTYPE (2, (DI, V4DI, UQI))
 
 DEF_LARCH_FTYPE (2, (INT, DF, DF))
 DEF_LARCH_FTYPE (2, (INT, SF, SF))
@@ -104,21 +106,31 @@ DEF_LARCH_FTYPE (3, (SI, SI, SI, SI))
 DEF_LARCH_FTYPE (3, (SI, SI, SI, QI))
 DEF_LARCH_FTYPE (1, (SI, UQI))
 DEF_LARCH_FTYPE (1, (SI, UV16QI))
+DEF_LARCH_FTYPE (1, (SI, UV32QI))
 DEF_LARCH_FTYPE (1, (SI, UV2DI))
+DEF_LARCH_FTYPE (1, (SI, UV4DI))
 DEF_LARCH_FTYPE (1, (SI, UV4SI))
+DEF_LARCH_FTYPE (1, (SI, UV8SI))
 DEF_LARCH_FTYPE (1, (SI, UV8HI))
+DEF_LARCH_FTYPE (1, (SI, UV16HI))
 DEF_LARCH_FTYPE (2, (SI, V16QI, UQI))
+DEF_LARCH_FTYPE (2, (SI, V32QI, UQI))
 DEF_LARCH_FTYPE (1, (SI, V2HI))
 DEF_LARCH_FTYPE (2, (SI, V2HI, V2HI))
 DEF_LARCH_FTYPE (1, (SI, V4QI))
 DEF_LARCH_FTYPE (2, (SI, V4QI, V4QI))
 DEF_LARCH_FTYPE (2, (SI, V4SI, UQI))
+DEF_LARCH_FTYPE (2, (SI, V8SI, UQI))
 DEF_LARCH_FTYPE (2, (SI, V8HI, UQI))
 DEF_LARCH_FTYPE (1, (SI, VOID))
 
 DEF_LARCH_FTYPE (2, (UDI, UDI, UDI))
+DEF_LARCH_FTYPE (2, (USI, V32QI, UQI))
 DEF_LARCH_FTYPE (2, (UDI, UV2SI, UV2SI))
+DEF_LARCH_FTYPE (2, (USI, V8SI, UQI))
 DEF_LARCH_FTYPE (2, (UDI, V2DI, UQI))
+DEF_LARCH_FTYPE (2, (USI, V16HI, UQI))
+DEF_LARCH_FTYPE (2, (UDI, V4DI, UQI))
 
 DEF_LARCH_FTYPE (2, (USI, V16QI, UQI))
 DEF_LARCH_FTYPE (2, (USI, V4SI, UQI))
@@ -142,6 +154,23 @@ DEF_LARCH_FTYPE (2, (UV2DI, UV2DI, V2DI))
 DEF_LARCH_FTYPE (2, (UV2DI, UV4SI, UV4SI))
 DEF_LARCH_FTYPE (1, (UV2DI, V2DF))
 
+DEF_LARCH_FTYPE (2, (UV32QI, UV32QI, UQI))
+DEF_LARCH_FTYPE (2, (UV32QI, UV32QI, USI))
+DEF_LARCH_FTYPE (2, (UV32QI, UV32QI, UV32QI))
+DEF_LARCH_FTYPE (3, (UV32QI, UV32QI, UV32QI, UQI))
+DEF_LARCH_FTYPE (3, (UV32QI, UV32QI, UV32QI, USI))
+DEF_LARCH_FTYPE (3, (UV32QI, UV32QI, UV32QI, UV32QI))
+DEF_LARCH_FTYPE (2, (UV32QI, UV32QI, V32QI))
+
+DEF_LARCH_FTYPE (2, (UV4DI, UV4DI, UQI))
+DEF_LARCH_FTYPE (2, (UV4DI, UV4DI, UV4DI))
+DEF_LARCH_FTYPE (3, (UV4DI, UV4DI, UV4DI, UQI))
+DEF_LARCH_FTYPE (3, (UV4DI, UV4DI, UV4DI, UV4DI))
+DEF_LARCH_FTYPE (3, (UV4DI, UV4DI, UV8SI, UV8SI))
+DEF_LARCH_FTYPE (2, (UV4DI, UV4DI, V4DI))
+DEF_LARCH_FTYPE (2, (UV4DI, UV8SI, UV8SI))
+DEF_LARCH_FTYPE (1, (UV4DI, V4DF))
+
 DEF_LARCH_FTYPE (2, (UV2SI, UV2SI, UQI))
 DEF_LARCH_FTYPE (2, (UV2SI, UV2SI, UV2SI))
 
@@ -170,7 +199,22 @@ DEF_LARCH_FTYPE (3, (UV8HI, UV8HI, UV8HI, UQI))
 DEF_LARCH_FTYPE (3, (UV8HI, UV8HI, UV8HI, UV8HI))
 DEF_LARCH_FTYPE (2, (UV8HI, UV8HI, V8HI))
 
-
+DEF_LARCH_FTYPE (2, (UV8SI, UV8SI, UQI))
+DEF_LARCH_FTYPE (2, (UV8SI, UV8SI, UV8SI))
+DEF_LARCH_FTYPE (3, (UV8SI, UV8SI, UV8SI, UQI))
+DEF_LARCH_FTYPE (3, (UV8SI, UV8SI, UV8SI, UV8SI))
+DEF_LARCH_FTYPE (3, (UV8SI, UV8SI, UV16HI, UV16HI))
+DEF_LARCH_FTYPE (2, (UV8SI, UV8SI, V8SI))
+DEF_LARCH_FTYPE (2, (UV8SI, UV16HI, UV16HI))
+DEF_LARCH_FTYPE (1, (UV8SI, V8SF))
+
+DEF_LARCH_FTYPE (2, (UV16HI, UV32QI, UV32QI))
+DEF_LARCH_FTYPE (2, (UV16HI, UV16HI, UQI))
+DEF_LARCH_FTYPE (3, (UV16HI, UV16HI, UV32QI, UV32QI))
+DEF_LARCH_FTYPE (2, (UV16HI, UV16HI, UV16HI))
+DEF_LARCH_FTYPE (3, (UV16HI, UV16HI, UV16HI, UQI))
+DEF_LARCH_FTYPE (3, (UV16HI, UV16HI, UV16HI, UV16HI))
+DEF_LARCH_FTYPE (2, (UV16HI, UV16HI, V16HI))
 
 DEF_LARCH_FTYPE (2, (UV8QI, UV4HI, UV4HI))
 DEF_LARCH_FTYPE (1, (UV8QI, UV8QI))
@@ -196,6 +240,25 @@ DEF_LARCH_FTYPE (4, (V16QI, V16QI, V16QI, UQI, UQI))
 DEF_LARCH_FTYPE (3, (V16QI, V16QI, V16QI, USI))
 DEF_LARCH_FTYPE (3, (V16QI, V16QI, V16QI, V16QI))
 
+DEF_LARCH_FTYPE (2, (V32QI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (2, (V32QI, CVPOINTER, DI))
+DEF_LARCH_FTYPE (1, (V32QI, HI))
+DEF_LARCH_FTYPE (1, (V32QI, SI))
+DEF_LARCH_FTYPE (2, (V32QI, UV32QI, UQI))
+DEF_LARCH_FTYPE (2, (V32QI, UV32QI, UV32QI))
+DEF_LARCH_FTYPE (1, (V32QI, V32QI))
+DEF_LARCH_FTYPE (2, (V32QI, V32QI, QI))
+DEF_LARCH_FTYPE (2, (V32QI, V32QI, SI))
+DEF_LARCH_FTYPE (2, (V32QI, V32QI, UQI))
+DEF_LARCH_FTYPE (2, (V32QI, V32QI, USI))
+DEF_LARCH_FTYPE (3, (V32QI, V32QI, SI, UQI))
+DEF_LARCH_FTYPE (3, (V32QI, V32QI, UQI, V32QI))
+DEF_LARCH_FTYPE (2, (V32QI, V32QI, V32QI))
+DEF_LARCH_FTYPE (3, (V32QI, V32QI, V32QI, SI))
+DEF_LARCH_FTYPE (3, (V32QI, V32QI, V32QI, UQI))
+DEF_LARCH_FTYPE (4, (V32QI, V32QI, V32QI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V32QI, V32QI, V32QI, USI))
+DEF_LARCH_FTYPE (3, (V32QI, V32QI, V32QI, V32QI))
 
 DEF_LARCH_FTYPE (1, (V2DF, DF))
 DEF_LARCH_FTYPE (1, (V2DF, UV2DI))
@@ -207,6 +270,16 @@ DEF_LARCH_FTYPE (1, (V2DF, V2DI))
 DEF_LARCH_FTYPE (1, (V2DF, V4SF))
 DEF_LARCH_FTYPE (1, (V2DF, V4SI))
 
+DEF_LARCH_FTYPE (1, (V4DF, DF))
+DEF_LARCH_FTYPE (1, (V4DF, UV4DI))
+DEF_LARCH_FTYPE (1, (V4DF, V4DF))
+DEF_LARCH_FTYPE (2, (V4DF, V4DF, V4DF))
+DEF_LARCH_FTYPE (3, (V4DF, V4DF, V4DF, V4DF))
+DEF_LARCH_FTYPE (2, (V4DF, V4DF, V4DI))
+DEF_LARCH_FTYPE (1, (V4DF, V4DI))
+DEF_LARCH_FTYPE (1, (V4DF, V8SF))
+DEF_LARCH_FTYPE (1, (V4DF, V8SI))
+
 DEF_LARCH_FTYPE (2, (V2DI, CVPOINTER, SI))
 DEF_LARCH_FTYPE (1, (V2DI, DI))
 DEF_LARCH_FTYPE (1, (V2DI, HI))
@@ -233,6 +306,32 @@ DEF_LARCH_FTYPE (3, (V2DI, V2DI, V2DI, V2DI))
 DEF_LARCH_FTYPE (3, (V2DI, V2DI, V4SI, V4SI))
 DEF_LARCH_FTYPE (2, (V2DI, V4SI, V4SI))
 
+DEF_LARCH_FTYPE (2, (V4DI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (1, (V4DI, DI))
+DEF_LARCH_FTYPE (1, (V4DI, HI))
+DEF_LARCH_FTYPE (2, (V4DI, UV4DI, UQI))
+DEF_LARCH_FTYPE (2, (V4DI, UV4DI, UV4DI))
+DEF_LARCH_FTYPE (2, (V4DI, UV8SI, UV8SI))
+DEF_LARCH_FTYPE (1, (V4DI, V4DF))
+DEF_LARCH_FTYPE (2, (V4DI, V4DF, V4DF))
+DEF_LARCH_FTYPE (1, (V4DI, V4DI))
+DEF_LARCH_FTYPE (1, (UV4DI, UV4DI))
+DEF_LARCH_FTYPE (2, (V4DI, V4DI, QI))
+DEF_LARCH_FTYPE (2, (V4DI, V4DI, SI))
+DEF_LARCH_FTYPE (2, (V4DI, V4DI, UQI))
+DEF_LARCH_FTYPE (2, (V4DI, V4DI, USI))
+DEF_LARCH_FTYPE (3, (V4DI, V4DI, DI, UQI))
+DEF_LARCH_FTYPE (3, (V4DI, V4DI, UQI, V4DI))
+DEF_LARCH_FTYPE (3, (V4DI, V4DI, UV8SI, UV8SI))
+DEF_LARCH_FTYPE (2, (V4DI, V4DI, V4DI))
+DEF_LARCH_FTYPE (3, (V4DI, V4DI, V4DI, SI))
+DEF_LARCH_FTYPE (3, (V4DI, V4DI, V4DI, USI))
+DEF_LARCH_FTYPE (3, (V4DI, V4DI, V4DI, UQI))
+DEF_LARCH_FTYPE (4, (V4DI, V4DI, V4DI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V4DI, V4DI, V4DI, V4DI))
+DEF_LARCH_FTYPE (3, (V4DI, V4DI, V8SI, V8SI))
+DEF_LARCH_FTYPE (2, (V4DI, V8SI, V8SI))
+
 DEF_LARCH_FTYPE (1, (V2HI, SI))
 DEF_LARCH_FTYPE (2, (V2HI, SI, SI))
 DEF_LARCH_FTYPE (3, (V2HI, SI, SI, SI))
@@ -274,6 +373,17 @@ DEF_LARCH_FTYPE (3, (V4SF, V4SF, V4SF, V4SF))
 DEF_LARCH_FTYPE (2, (V4SF, V4SF, V4SI))
 DEF_LARCH_FTYPE (1, (V4SF, V4SI))
 DEF_LARCH_FTYPE (1, (V4SF, V8HI))
+DEF_LARCH_FTYPE (1, (V8SF, V16HI))
+
+DEF_LARCH_FTYPE (1, (V8SF, SF))
+DEF_LARCH_FTYPE (1, (V8SF, UV8SI))
+DEF_LARCH_FTYPE (2, (V8SF, V4DF, V4DF))
+DEF_LARCH_FTYPE (1, (V8SF, V8SF))
+DEF_LARCH_FTYPE (2, (V8SF, V8SF, V8SF))
+DEF_LARCH_FTYPE (3, (V8SF, V8SF, V8SF, V8SF))
+DEF_LARCH_FTYPE (2, (V8SF, V8SF, V8SI))
+DEF_LARCH_FTYPE (1, (V8SF, V8SI))
+DEF_LARCH_FTYPE (1, (V8SF, V8HI))
 
 DEF_LARCH_FTYPE (2, (V4SI, CVPOINTER, SI))
 DEF_LARCH_FTYPE (1, (V4SI, HI))
@@ -282,6 +392,7 @@ DEF_LARCH_FTYPE (2, (V4SI, UV4SI, UQI))
 DEF_LARCH_FTYPE (2, (V4SI, UV4SI, UV4SI))
 DEF_LARCH_FTYPE (2, (V4SI, UV8HI, UV8HI))
 DEF_LARCH_FTYPE (2, (V4SI, V2DF, V2DF))
+DEF_LARCH_FTYPE (2, (V8SI, V4DF, V4DF))
 DEF_LARCH_FTYPE (1, (V4SI, V4SF))
 DEF_LARCH_FTYPE (2, (V4SI, V4SF, V4SF))
 DEF_LARCH_FTYPE (1, (V4SI, V4SI))
@@ -301,6 +412,32 @@ DEF_LARCH_FTYPE (3, (V4SI, V4SI, V4SI, V4SI))
 DEF_LARCH_FTYPE (3, (V4SI, V4SI, V8HI, V8HI))
 DEF_LARCH_FTYPE (2, (V4SI, V8HI, V8HI))
 
+DEF_LARCH_FTYPE (2, (V8SI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (1, (V8SI, HI))
+DEF_LARCH_FTYPE (1, (V8SI, SI))
+DEF_LARCH_FTYPE (2, (V8SI, UV8SI, UQI))
+DEF_LARCH_FTYPE (2, (V8SI, UV8SI, UV8SI))
+DEF_LARCH_FTYPE (2, (V8SI, UV16HI, UV16HI))
+DEF_LARCH_FTYPE (2, (V8SI, V2DF, V2DF))
+DEF_LARCH_FTYPE (1, (V8SI, V8SF))
+DEF_LARCH_FTYPE (2, (V8SI, V8SF, V8SF))
+DEF_LARCH_FTYPE (1, (V8SI, V8SI))
+DEF_LARCH_FTYPE (2, (V8SI, V8SI, QI))
+DEF_LARCH_FTYPE (2, (V8SI, V8SI, SI))
+DEF_LARCH_FTYPE (2, (V8SI, V8SI, UQI))
+DEF_LARCH_FTYPE (2, (V8SI, V8SI, USI))
+DEF_LARCH_FTYPE (3, (V8SI, V8SI, SI, UQI))
+DEF_LARCH_FTYPE (3, (V8SI, V8SI, UQI, V8SI))
+DEF_LARCH_FTYPE (3, (V8SI, V8SI, UV16HI, UV16HI))
+DEF_LARCH_FTYPE (2, (V8SI, V8SI, V8SI))
+DEF_LARCH_FTYPE (3, (V8SI, V8SI, V8SI, SI))
+DEF_LARCH_FTYPE (3, (V8SI, V8SI, V8SI, UQI))
+DEF_LARCH_FTYPE (3, (V8SI, V8SI, V8SI, USI))
+DEF_LARCH_FTYPE (4, (V8SI, V8SI, V8SI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V8SI, V8SI, V8SI, V8SI))
+DEF_LARCH_FTYPE (3, (V8SI, V8SI, V16HI, V16HI))
+DEF_LARCH_FTYPE (2, (V8SI, V16HI, V16HI))
+
 DEF_LARCH_FTYPE (2, (V8HI, CVPOINTER, SI))
 DEF_LARCH_FTYPE (1, (V8HI, HI))
 DEF_LARCH_FTYPE (1, (V8HI, SI))
@@ -326,6 +463,31 @@ DEF_LARCH_FTYPE (4, (V8HI, V8HI, V8HI, UQI, UQI))
 DEF_LARCH_FTYPE (3, (V8HI, V8HI, V8HI, USI))
 DEF_LARCH_FTYPE (3, (V8HI, V8HI, V8HI, V8HI))
 
+DEF_LARCH_FTYPE (2, (V16HI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (1, (V16HI, HI))
+DEF_LARCH_FTYPE (1, (V16HI, SI))
+DEF_LARCH_FTYPE (2, (V16HI, UV32QI, UV32QI))
+DEF_LARCH_FTYPE (2, (V16HI, UV16HI, UQI))
+DEF_LARCH_FTYPE (2, (V16HI, UV16HI, UV16HI))
+DEF_LARCH_FTYPE (2, (V16HI, V32QI, V32QI))
+DEF_LARCH_FTYPE (2, (V16HI, V8SF, V8SF))
+DEF_LARCH_FTYPE (1, (V16HI, V16HI))
+DEF_LARCH_FTYPE (2, (V16HI, V16HI, QI))
+DEF_LARCH_FTYPE (2, (V16HI, V16HI, SI))
+DEF_LARCH_FTYPE (3, (V16HI, V16HI, SI, UQI))
+DEF_LARCH_FTYPE (2, (V16HI, V16HI, UQI))
+DEF_LARCH_FTYPE (2, (V16HI, V16HI, USI))
+DEF_LARCH_FTYPE (3, (V16HI, V16HI, UQI, SI))
+DEF_LARCH_FTYPE (3, (V16HI, V16HI, UQI, V16HI))
+DEF_LARCH_FTYPE (3, (V16HI, V16HI, UV32QI, UV32QI))
+DEF_LARCH_FTYPE (3, (V16HI, V16HI, V32QI, V32QI))
+DEF_LARCH_FTYPE (2, (V16HI, V16HI, V16HI))
+DEF_LARCH_FTYPE (3, (V16HI, V16HI, V16HI, SI))
+DEF_LARCH_FTYPE (3, (V16HI, V16HI, V16HI, UQI))
+DEF_LARCH_FTYPE (4, (V16HI, V16HI, V16HI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V16HI, V16HI, V16HI, USI))
+DEF_LARCH_FTYPE (3, (V16HI, V16HI, V16HI, V16HI))
+
 DEF_LARCH_FTYPE (2, (V8QI, V4HI, V4HI))
 DEF_LARCH_FTYPE (1, (V8QI, V8QI))
 DEF_LARCH_FTYPE (2, (V8QI, V8QI, V8QI))
@@ -337,62 +499,113 @@ DEF_LARCH_FTYPE (2, (VOID, USI, UQI))
 DEF_LARCH_FTYPE (1, (VOID, UHI))
 DEF_LARCH_FTYPE (3, (VOID, V16QI, CVPOINTER, SI))
 DEF_LARCH_FTYPE (3, (VOID, V16QI, CVPOINTER, DI))
+DEF_LARCH_FTYPE (3, (VOID, V32QI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (3, (VOID, V32QI, CVPOINTER, DI))
+DEF_LARCH_FTYPE (3, (VOID, V4DF, POINTER, SI))
 DEF_LARCH_FTYPE (3, (VOID, V2DF, POINTER, SI))
 DEF_LARCH_FTYPE (3, (VOID, V2DI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (3, (VOID, V4DI, CVPOINTER, SI))
 DEF_LARCH_FTYPE (2, (VOID, V2HI, V2HI))
 DEF_LARCH_FTYPE (2, (VOID, V4QI, V4QI))
 DEF_LARCH_FTYPE (3, (VOID, V4SF, POINTER, SI))
+DEF_LARCH_FTYPE (3, (VOID, V8SF, POINTER, SI))
 DEF_LARCH_FTYPE (3, (VOID, V4SI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (3, (VOID, V8SI, CVPOINTER, SI))
 DEF_LARCH_FTYPE (3, (VOID, V8HI, CVPOINTER, SI))
+DEF_LARCH_FTYPE (3, (VOID, V16HI, CVPOINTER, SI))
 
+DEF_LARCH_FTYPE (1, (V16HI, V32QI))
+DEF_LARCH_FTYPE (1, (UV16HI, UV32QI))
+DEF_LARCH_FTYPE (1, (V8SI, V32QI))
+DEF_LARCH_FTYPE (1, (V4DI, V32QI))
 DEF_LARCH_FTYPE (1, (V8HI, V16QI))
 DEF_LARCH_FTYPE (1, (V4SI, V16QI))
 DEF_LARCH_FTYPE (1, (V2DI, V16QI))
+DEF_LARCH_FTYPE (1, (UV8SI, UV16HI))
+DEF_LARCH_FTYPE (1, (V8SI, V16HI))
+DEF_LARCH_FTYPE (1, (V4DI, V16HI))
 DEF_LARCH_FTYPE (1, (V4SI, V8HI))
 DEF_LARCH_FTYPE (1, (V2DI, V8HI))
 DEF_LARCH_FTYPE (1, (V2DI, V4SI))
+DEF_LARCH_FTYPE (1, (V4DI, V8SI))
+DEF_LARCH_FTYPE (1, (UV4DI, UV8SI))
+DEF_LARCH_FTYPE (1, (UV16HI, V32QI))
+DEF_LARCH_FTYPE (1, (UV8SI, V32QI))
+DEF_LARCH_FTYPE (1, (UV4DI, V32QI))
 DEF_LARCH_FTYPE (1, (UV8HI, V16QI))
 DEF_LARCH_FTYPE (1, (UV4SI, V16QI))
 DEF_LARCH_FTYPE (1, (UV2DI, V16QI))
+DEF_LARCH_FTYPE (1, (UV8SI, V16HI))
+DEF_LARCH_FTYPE (1, (UV4DI, V16HI))
 DEF_LARCH_FTYPE (1, (UV4SI, V8HI))
 DEF_LARCH_FTYPE (1, (UV2DI, V8HI))
 DEF_LARCH_FTYPE (1, (UV2DI, V4SI))
+DEF_LARCH_FTYPE (1, (UV4DI, V8SI))
 DEF_LARCH_FTYPE (1, (UV8HI, UV16QI))
 DEF_LARCH_FTYPE (1, (UV4SI, UV16QI))
 DEF_LARCH_FTYPE (1, (UV2DI, UV16QI))
+DEF_LARCH_FTYPE (1, (UV4DI, UV32QI))
 DEF_LARCH_FTYPE (1, (UV4SI, UV8HI))
 DEF_LARCH_FTYPE (1, (UV2DI, UV8HI))
 DEF_LARCH_FTYPE (1, (UV2DI, UV4SI))
 DEF_LARCH_FTYPE (2, (UV8HI, V16QI, V16QI))
 DEF_LARCH_FTYPE (2, (UV4SI, V8HI, V8HI))
 DEF_LARCH_FTYPE (2, (UV2DI, V4SI, V4SI))
+DEF_LARCH_FTYPE (2, (V16HI, V32QI, UQI))
+DEF_LARCH_FTYPE (2, (V8SI, V16HI, UQI))
+DEF_LARCH_FTYPE (2, (V4DI, V8SI, UQI))
 DEF_LARCH_FTYPE (2, (V8HI, V16QI, UQI))
 DEF_LARCH_FTYPE (2, (V4SI, V8HI, UQI))
 DEF_LARCH_FTYPE (2, (V2DI, V4SI, UQI))
+DEF_LARCH_FTYPE (2, (UV16HI, UV32QI, UQI))
+DEF_LARCH_FTYPE (2, (UV8SI, UV16HI, UQI))
+DEF_LARCH_FTYPE (2, (UV4DI, UV8SI, UQI))
 DEF_LARCH_FTYPE (2, (UV8HI, UV16QI, UQI))
 DEF_LARCH_FTYPE (2, (UV4SI, UV8HI, UQI))
 DEF_LARCH_FTYPE (2, (UV2DI, UV4SI, UQI))
+DEF_LARCH_FTYPE (2, (V32QI, V16HI, V16HI))
+DEF_LARCH_FTYPE (2, (V16HI, V8SI, V8SI))
+DEF_LARCH_FTYPE (2, (V8SI, V4DI, V4DI))
 DEF_LARCH_FTYPE (2, (V16QI, V8HI, V8HI))
 DEF_LARCH_FTYPE (2, (V8HI, V4SI, V4SI))
 DEF_LARCH_FTYPE (2, (V4SI, V2DI, V2DI))
+DEF_LARCH_FTYPE (2, (UV32QI, UV16HI, UV16HI))
+DEF_LARCH_FTYPE (2, (UV16HI, UV8SI, UV8SI))
+DEF_LARCH_FTYPE (2, (UV8SI, UV4DI, UV4DI))
 DEF_LARCH_FTYPE (2, (UV16QI, UV8HI, UV8HI))
 DEF_LARCH_FTYPE (2, (UV8HI, UV4SI, UV4SI))
 DEF_LARCH_FTYPE (2, (UV4SI, UV2DI, UV2DI))
+DEF_LARCH_FTYPE (2, (V32QI, V16HI, UQI))
+DEF_LARCH_FTYPE (2, (V16HI, V8SI, UQI))
+DEF_LARCH_FTYPE (2, (V8SI, V4DI, UQI))
 DEF_LARCH_FTYPE (2, (V16QI, V8HI, UQI))
 DEF_LARCH_FTYPE (2, (V8HI, V4SI, UQI))
 DEF_LARCH_FTYPE (2, (V4SI, V2DI, UQI))
+DEF_LARCH_FTYPE (2, (UV32QI, UV16HI, UQI))
+DEF_LARCH_FTYPE (2, (UV16HI, UV8SI, UQI))
+DEF_LARCH_FTYPE (2, (UV8SI, UV4DI, UQI))
 DEF_LARCH_FTYPE (2, (UV16QI, UV8HI, UQI))
 DEF_LARCH_FTYPE (2, (UV8HI, UV4SI, UQI))
 DEF_LARCH_FTYPE (2, (UV4SI, UV2DI, UQI))
+DEF_LARCH_FTYPE (2, (V32QI, V32QI, DI))
 DEF_LARCH_FTYPE (2, (V16QI, V16QI, DI))
+DEF_LARCH_FTYPE (2, (V32QI, UQI, UQI))
 DEF_LARCH_FTYPE (2, (V16QI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V32QI, V32QI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V16HI, V16HI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V8SI, V8SI, UQI, UQI))
+DEF_LARCH_FTYPE (3, (V4DI, V4DI, UQI, UQI))
 DEF_LARCH_FTYPE (3, (V16QI, V16QI, UQI, UQI))
 DEF_LARCH_FTYPE (3, (V8HI, V8HI, UQI, UQI))
 DEF_LARCH_FTYPE (3, (V4SI, V4SI, UQI, UQI))
 DEF_LARCH_FTYPE (3, (V2DI, V2DI, UQI, UQI))
+DEF_LARCH_FTYPE (2, (V8SF, V4DI, V4DI))
 DEF_LARCH_FTYPE (2, (V4SF, V2DI, V2DI))
+DEF_LARCH_FTYPE (1, (V4DI, V8SF))
 DEF_LARCH_FTYPE (1, (V2DI, V4SF))
+DEF_LARCH_FTYPE (2, (V4DI, UQI, USI))
 DEF_LARCH_FTYPE (2, (V2DI, UQI, USI))
+DEF_LARCH_FTYPE (2, (V4DI, UQI, UQI))
 DEF_LARCH_FTYPE (2, (V2DI, UQI, UQI))
 DEF_LARCH_FTYPE (4, (VOID, SI, UQI, V16QI, CVPOINTER))
 DEF_LARCH_FTYPE (4, (VOID, SI, UQI, V8HI, CVPOINTER))
@@ -402,6 +615,17 @@ DEF_LARCH_FTYPE (2, (V16QI, SI, CVPOINTER))
 DEF_LARCH_FTYPE (2, (V8HI, SI, CVPOINTER))
 DEF_LARCH_FTYPE (2, (V4SI, SI, CVPOINTER))
 DEF_LARCH_FTYPE (2, (V2DI, SI, CVPOINTER))
+DEF_LARCH_FTYPE (4, (VOID, V32QI, UQI, SI,  CVPOINTER))
+DEF_LARCH_FTYPE (4, (VOID, V16HI, UQI, SI, CVPOINTER))
+DEF_LARCH_FTYPE (4, (VOID, V8SI, UQI, SI, CVPOINTER))
+DEF_LARCH_FTYPE (4, (VOID, V4DI, UQI, SI, CVPOINTER))
+DEF_LARCH_FTYPE (3, (VOID, V32QI, SI,  CVPOINTER))
+DEF_LARCH_FTYPE (2, (V32QI, SI, CVPOINTER))
+DEF_LARCH_FTYPE (2, (V16HI, SI, CVPOINTER))
+DEF_LARCH_FTYPE (2, (V8SI, SI, CVPOINTER))
+DEF_LARCH_FTYPE (2, (V4DI, SI, CVPOINTER))
+DEF_LARCH_FTYPE (1, (V32QI, POINTER))
+DEF_LARCH_FTYPE (2, (VOID, V32QI, POINTER))
 DEF_LARCH_FTYPE (2, (V8HI, UV16QI, V16QI))
 DEF_LARCH_FTYPE (2, (V16QI, V16QI, UV16QI))
 DEF_LARCH_FTYPE (2, (UV16QI, V16QI, UV16QI))
@@ -431,6 +655,33 @@ DEF_LARCH_FTYPE (3, (V4SI, V4SI, V16QI, V16QI))
 DEF_LARCH_FTYPE (3, (V4SI, V4SI, UV16QI, V16QI))
 DEF_LARCH_FTYPE (3, (UV4SI, UV4SI, UV16QI, UV16QI))
 
+
+DEF_LARCH_FTYPE(2,(V4DI,V16HI,V16HI))
+DEF_LARCH_FTYPE(2,(V4DI,UV4SI,V4SI))
+DEF_LARCH_FTYPE(2,(V8SI,UV16HI,V16HI))
+DEF_LARCH_FTYPE(2,(V16HI,UV32QI,V32QI))
+DEF_LARCH_FTYPE(2,(V4DI,UV8SI,V8SI))
+DEF_LARCH_FTYPE(3,(V4DI,V4DI,V16HI,V16HI))
+DEF_LARCH_FTYPE(2,(UV32QI,V32QI,UV32QI))
+DEF_LARCH_FTYPE(2,(UV16HI,V16HI,UV16HI))
+DEF_LARCH_FTYPE(2,(UV8SI,V8SI,UV8SI))
+DEF_LARCH_FTYPE(2,(UV4DI,V4DI,UV4DI))
+DEF_LARCH_FTYPE(3,(V4DI,V4DI,UV4DI,V4DI))
+DEF_LARCH_FTYPE(3,(V4DI,V4DI,UV8SI,V8SI))
+DEF_LARCH_FTYPE(3,(V8SI,V8SI,UV16HI,V16HI))
+DEF_LARCH_FTYPE(3,(V16HI,V16HI,UV32QI,V32QI))
+DEF_LARCH_FTYPE(2,(V4DI,UV4DI,V4DI))
+DEF_LARCH_FTYPE(2,(V8SI,V32QI,V32QI))
+DEF_LARCH_FTYPE(2,(UV4DI,UV16HI,UV16HI))
+DEF_LARCH_FTYPE(2,(V4DI,UV16HI,V16HI))
+DEF_LARCH_FTYPE(3,(V8SI,V8SI,V32QI,V32QI))
+DEF_LARCH_FTYPE(3,(UV8SI,UV8SI,UV32QI,UV32QI))
+DEF_LARCH_FTYPE(3,(UV4DI,UV4DI,UV16HI,UV16HI))
+DEF_LARCH_FTYPE(3,(V8SI,V8SI,UV32QI,V32QI))
+DEF_LARCH_FTYPE(3,(V4DI,V4DI,UV16HI,V16HI))
+DEF_LARCH_FTYPE(2,(UV8SI,UV32QI,UV32QI))
+DEF_LARCH_FTYPE(2,(V8SI,UV32QI,V32QI))
+
 DEF_LARCH_FTYPE(4,(VOID,V16QI,CVPOINTER,SI,UQI))
 DEF_LARCH_FTYPE(4,(VOID,V8HI,CVPOINTER,SI,UQI))
 DEF_LARCH_FTYPE(4,(VOID,V4SI,CVPOINTER,SI,UQI))
@@ -448,11 +699,29 @@ DEF_LARCH_FTYPE (3, (UV8HI, UV8HI, V8HI, USI))
 DEF_LARCH_FTYPE (3, (UV4SI, UV4SI, V4SI, USI))
 DEF_LARCH_FTYPE (3, (UV2DI, UV2DI, V2DI, USI))
 
+DEF_LARCH_FTYPE (2, (DI, V8SI, UQI))
+DEF_LARCH_FTYPE (2, (UDI, V8SI, UQI))
+
+DEF_LARCH_FTYPE (3, (UV32QI, UV32QI, V32QI, USI))
+DEF_LARCH_FTYPE (3, (UV16HI, UV16HI, V16HI, USI))
+DEF_LARCH_FTYPE (3, (UV8SI, UV8SI, V8SI, USI))
+DEF_LARCH_FTYPE (3, (UV4DI, UV4DI, V4DI, USI))
+
+DEF_LARCH_FTYPE(4,(VOID,V32QI,CVPOINTER,SI,UQI))
+DEF_LARCH_FTYPE(4,(VOID,V16HI,CVPOINTER,SI,UQI))
+DEF_LARCH_FTYPE(4,(VOID,V8SI,CVPOINTER,SI,UQI))
+DEF_LARCH_FTYPE(4,(VOID,V4DI,CVPOINTER,SI,UQI))
+
 DEF_LARCH_FTYPE (1, (BOOLEAN,V16QI))
 DEF_LARCH_FTYPE(2,(V16QI,CVPOINTER,CVPOINTER))
 DEF_LARCH_FTYPE(3,(VOID,V16QI,CVPOINTER,CVPOINTER))
+DEF_LARCH_FTYPE(2,(V32QI,CVPOINTER,CVPOINTER))
+DEF_LARCH_FTYPE(3,(VOID,V32QI,CVPOINTER,CVPOINTER))
 
 DEF_LARCH_FTYPE (3, (V16QI, V16QI, SI, UQI))
 DEF_LARCH_FTYPE (3, (V2DI, V2DI, SI, UQI))
 DEF_LARCH_FTYPE (3, (V2DI, V2DI, DI, UQI))
 DEF_LARCH_FTYPE (3, (V4SI, V4SI, SI, UQI))
+
+DEF_LARCH_FTYPE (2, (V8SF, V8SF, UQI))
+DEF_LARCH_FTYPE (2, (V4DF, V4DF, UQI))
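Reading aid, not itself part of the change: each DEF_LARCH_FTYPE entry records a builtin prototype as (return type, argument types...), and loongarch-builtins.cc ties those shapes to the individual builtins.  As a hedged example, the (V4DF, V4DF, V4DF) shape added above is the prototype used by an element-wise 256-bit double add such as:

  #include <lasxintrin.h>

  __m256d
  fadd (__m256d a, __m256d b)
  {
    return __lasx_xvfadd_d (a, b);   /* V4DF <- (V4DF, V4DF) */
  }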
-- 
2.36.0


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 7/8] LoongArch: Add Loongson SX directive test cases.
  2023-07-18 11:06 [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target Chenghui Pan
                   ` (5 preceding siblings ...)
  2023-07-18 11:06 ` [PATCH v2 6/8] LoongArch: Added Loongson ASX directive builtin function support Chenghui Pan
@ 2023-07-18 11:06 ` Chenghui Pan
  2023-07-18 11:06 ` [PATCH v2 8/8] LoongArch: Add Loongson ASX " Chenghui Pan
  2023-07-18 12:26 ` [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target Xi Ruoyao
  8 siblings, 0 replies; 11+ messages in thread
From: Chenghui Pan @ 2023-07-18 11:06 UTC (permalink / raw)
  To: gcc-patches; +Cc: xry111, i, chenglulu, xuchenghua

From: Lulu Cheng <chenglulu@loongson.cn>

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/vector/loongarch-vector.exp: New test.
	* gcc.target/loongarch/vector/lsx/lsx-bit-manipulate.c: New test.
	* gcc.target/loongarch/vector/lsx/lsx-builtin.c: New test.
	* gcc.target/loongarch/vector/lsx/lsx-cmp.c: New test.
	* gcc.target/loongarch/vector/lsx/lsx-fp-arith.c: New test.
	* gcc.target/loongarch/vector/lsx/lsx-fp-cvt.c: New test.
	* gcc.target/loongarch/vector/lsx/lsx-int-arith.c: New test.
	* gcc.target/loongarch/vector/lsx/lsx-mem.c: New test.
	* gcc.target/loongarch/vector/lsx/lsx-perm.c: New test.
	* gcc.target/loongarch/vector/lsx/lsx-str-manipulate.c: New test.
	* gcc.target/loongarch/vector/simd_correctness_check.h: New test.
---
 .../loongarch/vector/loongarch-vector.exp     |    42 +
 .../loongarch/vector/lsx/lsx-bit-manipulate.c | 15586 +++++++++++
 .../loongarch/vector/lsx/lsx-builtin.c        |  1461 +
 .../gcc.target/loongarch/vector/lsx/lsx-cmp.c |  3354 +++
 .../loongarch/vector/lsx/lsx-fp-arith.c       |  3713 +++
 .../loongarch/vector/lsx/lsx-fp-cvt.c         |  4114 +++
 .../loongarch/vector/lsx/lsx-int-arith.c      | 22424 ++++++++++++++++
 .../gcc.target/loongarch/vector/lsx/lsx-mem.c |   537 +
 .../loongarch/vector/lsx/lsx-perm.c           |  5555 ++++
 .../loongarch/vector/lsx/lsx-str-manipulate.c |   408 +
 .../loongarch/vector/simd_correctness_check.h |    39 +
 11 files changed, 57233 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/loongarch-vector.exp
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-bit-manipulate.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-builtin.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-cmp.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-fp-arith.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-fp-cvt.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-int-arith.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-mem.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-perm.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-str-manipulate.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/simd_correctness_check.h

diff --git a/gcc/testsuite/gcc.target/loongarch/vector/loongarch-vector.exp b/gcc/testsuite/gcc.target/loongarch/vector/loongarch-vector.exp
new file mode 100644
index 00000000000..b0616a26b0e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/loongarch-vector.exp
@@ -0,0 +1,42 @@
+# Copyright (C) 2021-2023 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with GCC; see the file COPYING3.  If not see
+# <http://www.gnu.org/licenses/>.
+
+# GCC testsuite that uses the `dg.exp' driver.
+
+# Exit immediately if this isn't a LoongArch target.
+if ![istarget loongarch*-*-*] then {
+  return
+}
+
+# Load support procs.
+load_lib gcc-dg.exp
+
+# If a testcase doesn't have special options, use these.
+global DEFAULT_CFLAGS
+if ![info exists DEFAULT_CFLAGS] then {
+    set DEFAULT_CFLAGS " -mlasx"
+}
+
+# Initialize `dg'.
+dg-init
+
+# Main loop.
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/lsx/*.\[cS\]]] \
+	"" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/lasx/*.\[cS\]]] \
+	"" $DEFAULT_CFLAGS
+# All done.
+dg-finish
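A hedged usage note (standard DejaGnu practice rather than anything specific to this file): once the series is applied, these tests can be run on their own from the gcc build directory with make check-gcc RUNTESTFLAGS="loongarch-vector.exp", or narrowed to a single source with RUNTESTFLAGS="loongarch-vector.exp=lsx-bit-manipulate.c".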
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-bit-manipulate.c b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-bit-manipulate.c
new file mode 100644
index 00000000000..1413d24baed
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-bit-manipulate.c
@@ -0,0 +1,15586 @@
+/* { dg-do run } */
+/* { dg-options "-mlsx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lsxintrin.h>
+int main ()
+{
+  __m128i __m128i_op0, __m128i_op1, __m128i_op2, __m128i_out, __m128i_result;
+  __m128 __m128_op0, __m128_op1, __m128_op2, __m128_out, __m128_result;
+  __m128d __m128d_op0, __m128d_op1, __m128d_op2, __m128d_out, __m128d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
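+  /* Each block below fills the two 64-bit halves of the source operands,
+     records the expected bit pattern, runs one LSX bit intrinsic and
+     compares the live result with the precomputed value via ASSERTEQ_64.  */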
+  *((int*)& __m128_op0[3]) = 0x0000c77c;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x03574e3a62407e03;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000001010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x03574e3a62407e03;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001fffff001fffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001fffff001fffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x001fffff001fffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x001fffff001fffff;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0001ffff9515;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0001ffff9515;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0001ffff9515;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85af0000b000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x67eb85af0000b000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0x67eb85af0000b000;
+  *((unsigned long*)& __m128i_result[0]) = 0xc8847ef6ed3f2000;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0313100003131000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0313100003131000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0007000000050000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0003000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00007a8000000480;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000485000004cc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vand_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7e44bde9b842ff23;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00011e80007edff8;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffc001fffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffc001fffffffff;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000200010;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_result[0]) = 0x4f804f804f804f80;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3e035e51522f0799;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3e035e51522f0799;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3e035e51522f0799;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffb00fdfdf7ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff8000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffb00fdfdf7ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff8000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffb00fdfdf7ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff8000000000000;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff80005613;
+  *((unsigned long*)& __m128i_op0[0]) = 0x81000080806b000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff00011cf0c569;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc0000002b0995850;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff9cf0d77b;
+  *((unsigned long*)& __m128i_result[0]) = 0xc1000082b0fb585b;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffbfff8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x80808080806b000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffbfffb;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000101010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001ffff0101ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001ffff0001ffff;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffc105d1aa;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffbc19ecca;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_op1[0]) = 0xbbc8ecc5f3ced5f3;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffff9bffbfb;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffdffdfb;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000f4012ceb;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000f4012ceb;
+  __m128i_out = __lsx_vxor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vxor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vxor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x40f3fa0000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3ff0000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x40f3fa0000000000;
+  __m128i_out = __lsx_vxor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000080000068;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000080000068;
+  __m128i_out = __lsx_vxor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff000001ffff9515;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0001ffff9514;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff0000ac26;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff000000000001;
+  __m128i_out = __lsx_vxor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000fff8fff8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fff80000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000fff8fff8;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fff80000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00070007;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff0007ffff;
+  __m128i_out = __lsx_vnor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vnor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vnor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vnor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xce23d33e43d9736c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x63b2ac27aa076aeb;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x31dc2cc1bc268c93;
+  *((unsigned long*)& __m128i_result[0]) = 0x9c4d53d855f89514;
+  __m128i_out = __lsx_vnor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff3;
+  __m128i_out = __lsx_vnor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000400080003fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000bc2000007e04;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000400080003fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000bc2000007e04;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffbfff7fffc000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff43dfffff81fb;
+  __m128i_out = __lsx_vnor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x021b7d24c9678a35;
+  *((unsigned long*)& __m128i_op0[0]) = 0x030298a6a1030a49;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_result[1]) = 0xada4808924882588;
+  *((unsigned long*)& __m128i_result[0]) = 0xacad25090caca5a4;
+  __m128i_out = __lsx_vnor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffe0000ff18;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100000000;
+  __m128i_out = __lsx_vnor_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000210011084;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000210011084;
+  __m128i_out = __lsx_vandn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vandn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000049000000c0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001ffffff29;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000049000000c0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffff29;
+  __m128i_out = __lsx_vandn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x010f00000111fffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0x016700dc0176003a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vandn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0003000000010000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000000010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vandn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffff000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffff000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vandn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x62cbf96e4acfaf40;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf0bc9a5278285a4a;
+  *((unsigned long*)& __m128i_result[1]) = 0x62cbf96e4acfaf40;
+  *((unsigned long*)& __m128i_result[0]) = 0xf0bc9a5278285a4a;
+  __m128i_out = __lsx_vandn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vandn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vandn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffe0004fffe0004;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vandn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c7c266e71768fa4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9c7c266e71768fa4;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vandn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000100010001fffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vorn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00d3012b015700bb;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00010000ffca0070;
+  *((unsigned long*)& __m128i_result[1]) = 0xff2cfed4fea8ff44;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffeffff0035ff8f;
+  __m128i_out = __lsx_vorn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00fe00fe00fe0045;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00fe00fe00fe0045;
+  __m128i_out = __lsx_vorn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000010000010101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101000001000100;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000010000010101;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0101000001000100;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vorn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vorn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vorn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x33f5c2d7d975d7fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe4423f7b769f8ffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x33f5c2d7d975d7fe;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vorn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00050eb00000fffa;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000f8a50000f310;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vorn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00bbfff7fffffff7;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff008ff820;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010012;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffe1ffc0;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff009ff83f;
+  __m128i_out = __lsx_vorn_v(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
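+  /* __lsx_vandi_b ANDs each byte of op0 with the 8-bit immediate.  */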
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vandi_b(__m128i_op0,0x36);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vandi_b(__m128i_op0,0x39);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vandi_b(__m128i_op0,0x27);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vandi_b(__m128i_op0,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vandi_b(__m128i_op0,0xbd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000a95afc60a5c5;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000b6e414157f84;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000204264602444;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000266404046604;
+  __m128i_out = __lsx_vandi_b(__m128i_op0,0x66);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
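+  /* __lsx_vori_b ORs each byte of op0 with the 8-bit immediate.  */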
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8282828282828282;
+  *((unsigned long*)& __m128i_result[0]) = 0x8282828282828282;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0x82);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7505853d654185f5;
+  *((unsigned long*)& __m128i_op0[0]) = 0x01010000fefe0101;
+  *((unsigned long*)& __m128i_result[1]) = 0x7545c57d6541c5f5;
+  *((unsigned long*)& __m128i_result[0]) = 0x41414040fefe4141;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0x40);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000c2f90000bafa;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000c2fa8000c2fa;
+  *((unsigned long*)& __m128i_result[1]) = 0x7474f6fd7474fefe;
+  *((unsigned long*)& __m128i_result[0]) = 0xf474f6fef474f6fe;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0x74);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m128i_result[0]) = 0x3d3d3d3d3d3d3d3d;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0x3d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffacdb6dbecac;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1f5533a694f902c0;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffadffedbfefe;
+  *((unsigned long*)& __m128i_result[0]) = 0x5f5f7bfedefb5ada;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0x5a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0x38);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0d1202e19235e2bc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xea38e0f75f6e56d1;
+  *((unsigned long*)& __m128i_result[1]) = 0x2f3626e7b637e6be;
+  *((unsigned long*)& __m128i_result[0]) = 0xee3ee6f77f6e76f7;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0x26);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_result[1]) = 0xd6d7ded7ded7defe;
+  *((unsigned long*)& __m128i_result[0]) = 0xd6d7ded7ded7defe;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0xd6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffe0000fffe0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7777777777777777;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff7777ffff7777;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0x77);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x2);
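+  /* Note: int_out above is not asserted; the __lsx_vpickve2gr_h call
+     (like the __lsx_vpickve2gr_du calls further down) presumably just
+     exercises the element-extract path.  */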
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0x55);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xd454545454545454;
+  *((unsigned long*)& __m128i_result[0]) = 0xd454545454545454;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0x54);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_result[0]) = 0x4f4f4f4f4f4f4f4f;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0x4f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8a8a8a8a8a8a8a8a;
+  *((unsigned long*)& __m128i_result[0]) = 0x8a8a8a8a8a8a8a8a;
+  __m128i_out = __lsx_vori_b(__m128i_op0,0x8a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
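+  /* __lsx_vxori_b XORs each byte of op0 with the 8-bit immediate.  */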
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0404040404040404;
+  *((unsigned long*)& __m128i_result[0]) = 0x0404040404040404;
+  __m128i_out = __lsx_vxori_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001000100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x5a5a5a5a5b5a5b5a;
+  *((unsigned long*)& __m128i_result[0]) = 0x5a5a5a5a5b5a5b5a;
+  __m128i_out = __lsx_vxori_b(__m128i_op0,0x5a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xe3e3e3e3e3e3e3e3;
+  *((unsigned long*)& __m128i_result[0]) = 0xe3e3e3e3e3e3e3e3;
+  __m128i_out = __lsx_vxori_b(__m128i_op0,0xe3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_result[1]) = 0x9a9a9a9a9a9a9a9a;
+  *((unsigned long*)& __m128i_result[0]) = 0x9aba9aba9aba9aba;
+  __m128i_out = __lsx_vxori_b(__m128i_op0,0x9a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x9090909090909090;
+  *((unsigned long*)& __m128i_result[0]) = 0x9090909090909090;
+  __m128i_out = __lsx_vxori_b(__m128i_op0,0x90);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000b81c8382;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000077af9450;
+  *((unsigned long*)& __m128i_result[1]) = 0xf1f1f1f149ed7273;
+  *((unsigned long*)& __m128i_result[0]) = 0xf1f1f1f1865e65a1;
+  __m128i_out = __lsx_vxori_b(__m128i_op0,0xf1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
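+  /* __lsx_vnori_b computes ~(byte | immediate) for each byte of op0.  */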
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xcccccccc0000cccc;
+  *((unsigned long*)& __m128i_result[0]) = 0xcccccccc0000cccc;
+  __m128i_out = __lsx_vnori_b(__m128i_op0,0x33);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vnori_b(__m128i_op0,0xa6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3e035e51522f0799;
+  *((unsigned long*)& __m128i_result[1]) = 0x9292929292929292;
+  *((unsigned long*)& __m128i_result[0]) = 0x8090808280909002;
+  __m128i_out = __lsx_vnori_b(__m128i_op0,0x6d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000ffc2f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00201df000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3838383838300010;
+  *((unsigned long*)& __m128i_result[0]) = 0x3818200838383838;
+  __m128i_out = __lsx_vnori_b(__m128i_op0,0xc7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2020202020207f7f;
+  *((unsigned long*)& __m128i_result[1]) = 0x5d5d5d5d5d5d5d5d;
+  *((unsigned long*)& __m128i_result[0]) = 0x5d5d5d5d5d5d0000;
+  __m128i_out = __lsx_vnori_b(__m128i_op0,0xa2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080808080808080;
+  __m128i_out = __lsx_vnori_b(__m128i_op0,0x7f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x1313131313131313;
+  *((unsigned long*)& __m128i_result[0]) = 0x1313131313131313;
+  __m128i_out = __lsx_vnori_b(__m128i_op0,0xec);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x9d9d9d9d9d9d9d9d;
+  *((unsigned long*)& __m128i_result[0]) = 0x9d9d9d9d9d9d9d9d;
+  __m128i_out = __lsx_vnori_b(__m128i_op0,0x62);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00f525682ffd27f2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00365c60317ff930;
+  *((unsigned long*)& __m128i_result[1]) = 0xe500c085c000c005;
+  *((unsigned long*)& __m128i_result[0]) = 0xe5c1a185c48004c5;
+  __m128i_out = __lsx_vnori_b(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
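+  /* __lsx_vsll_{b,h,w,d} shift each element of op0 left by the
+     corresponding element of op1, taken modulo the element width.  */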
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0xb9884ab93b0b80a0;
+  *((unsigned long*)& __m128i_result[0]) = 0xf11e970c68000000;
+  __m128i_out = __lsx_vsll_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0100000100010001;
+  __m128i_out = __lsx_vsll_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00307028003f80b0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0040007fff800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffc0ffffff81;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff008000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0060e050007f0160;
+  *((unsigned long*)& __m128i_result[0]) = 0x0040007fff800000;
+  __m128i_out = __lsx_vsll_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000401000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000401000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3fffffff80000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00003ffd000a4000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffcffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000fffd000a0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xf000800080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000a00028004000;
+  __m128i_out = __lsx_vsll_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6b9fe3649c9d6363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363bc9e8b696363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6b9fe3649c9d6363;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363bc9e8b696363;
+  *((unsigned long*)& __m128i_result[1]) = 0xb9fe3640e4eb1b18;
+  *((unsigned long*)& __m128i_result[0]) = 0x800000005b4b1b18;
+  __m128i_out = __lsx_vsll_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80001b155b4b0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00006c82;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00009b140000917b;
+  *((unsigned long*)& __m128i_result[1]) = 0x80000000fffffffc;
+  *((unsigned long*)& __m128i_result[0]) = 0xb150000000000000;
+  __m128i_out = __lsx_vsll_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff7e00000081;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000008000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsll_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x03f1e3d28b1a8a1a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x03f1e3d28b1a8a1a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x18e2184858682868;
+  __m128i_out = __lsx_vsll_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ff02d060;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ff02d060;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff02d06000000000;
+  __m128i_out = __lsx_vsll_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vsll_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000000000000;
+  __m128i_out = __lsx_vsll_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000000;
+  __m128i_out = __lsx_vsll_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000200000001c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000200000001c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000200000001c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000200000001c;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000020000000c0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000020000000c0;
+  __m128i_out = __lsx_vsll_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsll_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
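+  /* __lsx_vslli_{b,h,w,d} shift each element of op0 left by the
+     immediate shift count.  */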
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_d(__m128i_op0,0x35);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_w(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xaaaaffebcfb748e0;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfd293eab528e7ebe;
+  *((unsigned long*)& __m128i_result[1]) = 0xf6e91c0000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x51cfd7c000000000;
+  __m128i_out = __lsx_vslli_d(__m128i_op0,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffff0ffe04000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_d(__m128i_op0,0x3f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffcfffcfffcfffc;
+  __m128i_out = __lsx_vslli_h(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xc39fffff007fffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000fe00fd;
+  *((unsigned long*)& __m128i_result[1]) = 0x0e7ffffc01fffffc;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000003f803f4;
+  __m128i_out = __lsx_vslli_w(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslli_h(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000800000000000;
+  __m128i_out = __lsx_vslli_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000040;
+  __m128i_out = __lsx_vslli_h(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_w(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_d(__m128i_op0,0x3c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff00ffff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_result[1]) = 0xfcfcfc00fcfc00fc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfcfcfcfcfcfcfc00;
+  __m128i_out = __lsx_vslli_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000060;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_d(__m128i_op0,0x38);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_w(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000f00f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000f00f;
+  __m128i_out = __lsx_vslli_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000d46cdc13;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000060000000;
+  __m128i_out = __lsx_vslli_w(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x61608654a2d4f6da;
+  *((unsigned long*)& __m128i_result[1]) = 0xfee0000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xc2c00ca844a8ecb4;
+  __m128i_out = __lsx_vslli_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_h(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0100000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0100000000000000;
+  __m128i_out = __lsx_vslli_d(__m128i_op0,0x36);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_w(__m128i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_d(__m128i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_b(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff010300ff0103;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_result[0]) = 0xf0003000f0003000;
+  __m128i_out = __lsx_vslli_h(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_w(__m128i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000080000000;
+  __m128i_out = __lsx_vslli_w(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff800fff01;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff001ffe02;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_d(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd78cfd70b5f65d76;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5779108fdedda7e4;
+  *((unsigned long*)& __m128i_result[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_result[0]) = 0xc8847ef6ed3f2000;
+  __m128i_out = __lsx_vslli_d(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffff7fffffff7;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffff7fffffff7;
+  *((unsigned long*)& __m128i_result[1]) = 0xfcfcfcdcfcfcfcdc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfcfcfcdcfcfcfcdc;
+  __m128i_out = __lsx_vslli_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xc0c0c0c0c0c0c0c0;
+  *((unsigned long*)& __m128i_result[0]) = 0xc0c0c0c0c0c0c0c0;
+  __m128i_out = __lsx_vslli_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_h(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xe2560afe9c001a18;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe2560afe9c001a18;
+  *((unsigned long*)& __m128i_result[1]) = 0x89582bf870006860;
+  *((unsigned long*)& __m128i_result[0]) = 0x89582bf870006860;
+  __m128i_out = __lsx_vslli_w(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x841f000fc28f801f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x107c003c083c007c;
+  __m128i_out = __lsx_vslli_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff9727ffff9727;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffe79ffffba5f;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff972700000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffba5f00000000;
+  __m128i_out = __lsx_vslli_d(__m128i_op0,0x20);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x101b0330eb022002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x030220020310edc0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0080800080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000080008000;
+  __m128i_out = __lsx_vslli_b(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x317fce80317fce80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xf0000000f0000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslli_h(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0177fff0fffffff0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000011ff8bc;
+  *((unsigned long*)& __m128i_result[1]) = 0x05dfffc3ffffffc0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000047fe2f0;
+  __m128i_out = __lsx_vslli_d(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
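+  /* __lsx_vsrl_{b,h,w,d} shift each element of op0 right (logical) by
+     the corresponding element of op1, taken modulo the element width.  */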
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffefffffffef;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000005555555554;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000005555555554;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001000f000e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000fff1000ffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002a55005501;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002a55000001;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x80000000fff8fff8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80000000fff80000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f800000fff8fff8;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f800000fff80000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_result[0]) = 0x80000000fff80000;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0004000000040000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0004000000040000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000750500006541;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000100fffffefd;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00f900d7003d00e4;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003e00d100de002b;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f4000007f040000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f0200007f020000;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffe000000f6;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x01010101ffffff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x01010101000000f6;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ffffff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000049000000c0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001ffffff29;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffff7f00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff007f0101017f;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff2900000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000401000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff2900000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vsrl_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc2f9bafac2fac2fa;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101080408040804;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0804080407040804;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000010a000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101080408040804;
+  *((unsigned long*)& __m128i_result[0]) = 0x000100810080e081;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4688500046f6a000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f8000004f7fff02;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffffff03ffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00013fff;
+  __m128i_out = __lsx_vsrl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffe;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000021ffffffdf;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000e60;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0202fe02fd020102;
+  *((unsigned long*)& __m128i_result[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_result[0]) = 0x0400040004000400;
+  __m128i_out = __lsx_vsrl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101fe870101fe87;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101fe8700000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x61608654a2d4f6da;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000fb01;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000007000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0002000000000007;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000fb01;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000e0000;
+  __m128i_out = __lsx_vsrl_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff0000ff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ff0000ff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff0000000000;
+  __m128i_out = __lsx_vsrl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff000100ff00fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff003000ff00a0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff000100ff00fe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff003000ff00a0;
+  __m128i_out = __lsx_vsrl_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5d7f5d807fea807f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100010100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000080000000;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffe0000000;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000ff00ff;
+  __m128i_out = __lsx_vsrl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffe7fffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000001fd02;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffe1fffffff;
+  __m128i_out = __lsx_vsrl_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000900000009;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff7fffffff7f;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff007fff810001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000400530050ffa6;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff800fff01;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000007ff000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsrl_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000f3040705;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000f3040705;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4399d3221a29d3f2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000056f64adb9464;
+  *((unsigned long*)& __m128i_op1[0]) = 0x29ca096f235819c2;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000004399d32;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffffffffffff;
+  __m128i_out = __lsx_vsrl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
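+  /* __lsx_vsrli_{b,h,w,d}: logical right shift of each element by an
+     immediate, e.g. shifting each 32-bit lane of 0xffffffffffffffff right
+     by 0xf yields 0x0001ffff0001ffff (first case below).  */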
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001ffff0001ffff;
+  __m128i_out = __lsx_vsrli_w(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_w(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000020000000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000010000000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000100000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000080000;
+  __m128i_out = __lsx_vsrli_d(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000017fda829;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000017f0a82;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x03ff03ff03ff03ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000400000004000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000400000204010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000020000010200;
+  __m128i_out = __lsx_vsrli_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000006;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000003fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000003fffffff;
+  __m128i_out = __lsx_vsrli_w(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_w(__m128i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_d(__m128i_op0,0x37);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_w(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_d(__m128i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0020002000200020;
+  __m128i_out = __lsx_vsrli_w(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffefffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffefffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0007000700070007;
+  *((unsigned long*)& __m128i_result[0]) = 0x0007000700070007;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000c000c000c000c;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000003d0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000003d0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000030000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000030000;
+  __m128i_out = __lsx_vsrli_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000010000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00fe00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000000;
+  __m128i_out = __lsx_vsrli_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_d(__m128i_op0,0x3d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001000100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001000100;
+  __m128i_out = __lsx_vsrli_d(__m128i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000400000000;
+  __m128i_out = __lsx_vsrli_w(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_d(__m128i_op0,0x3a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vsrli_w(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xaa14efac3bb62636;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd6c22c8353a80d2c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002000300000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0003000000010000;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_w(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080000700000014;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffbffda;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001010101;
+  __m128i_out = __lsx_vsrli_b(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x000001fffdfffdff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000001fffdfffdff;
+  __m128i_out = __lsx_vsrli_d(__m128i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_w(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf9796558e39953fd;
+  *((unsigned long*)& __m128i_result[1]) = 0x001a64b345308091;
+  *((unsigned long*)& __m128i_result[0]) = 0x001f2f2cab1c732a;
+  __m128i_out = __lsx_vsrli_d(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000290;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000290;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000002;
+  __m128i_out = __lsx_vsrli_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00020000ffff0001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000003030000;
+  __m128i_out = __lsx_vsrli_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000002345454;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c0dec4ca;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000060006;
+  __m128i_out = __lsx_vsrli_h(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000200000000d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000eefff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf8e1a03affffe3e2;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000003e0000003f;
+  __m128i_out = __lsx_vsrli_w(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrli_d(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
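+  /* __lsx_vsra_{b,h,w,d}: arithmetic right shift of each element of op0 by
+     the corresponding element of op1 (modulo the element width), so the
+     sign bit is replicated into the vacated positions.  */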
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1e801ffc7fc00000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ed0008005e00a2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x007a007600150077;
+  *((unsigned long*)& __m128i_result[1]) = 0x0003000000010000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0007007f03fe0000;
+  __m128i_out = __lsx_vsra_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffe001ffffe001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffe001ffffe001;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsra_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3fc000003fc00000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3fc000003fc00000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3fc000003fc00000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3fc000003fc00000;
+  __m128i_out = __lsx_vsra_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00003ff000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000fffc00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x85bd6b0e94d89998;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd83c8081ffff8080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x85bd6b0e94d89998;
+  *((unsigned long*)& __m128i_result[0]) = 0xd83c8081ffff8080;
+  __m128i_out = __lsx_vsra_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xe0d56a9774f3ea31;
+  *((unsigned long*)& __m128i_op0[0]) = 0xbddaa86803e33c2a;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe0d56a9774f3ea31;
+  *((unsigned long*)& __m128i_op1[0]) = 0xbddaa86803e33c2a;
+  *((unsigned long*)& __m128i_result[1]) = 0xff0600d50e9ef518;
+  *((unsigned long*)& __m128i_result[0]) = 0xffefffa8007c000f;
+  __m128i_out = __lsx_vsra_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xaaaaffebcfb748e0;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfd293eab528e7ebe;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsra_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffefff6fff80002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe0404041e0404041;
+  *((unsigned long*)& __m128i_op1[0]) = 0x803f800080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000700ff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000040004000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0010002000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000700ff00000000;
+  __m128i_out = __lsx_vsra_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000820000ff81;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff810000ff81;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000820000ff81;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff810000ff81;
+  __m128i_out = __lsx_vsra_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x800080007f008000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a0aa9890a0ac5f3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffff000;
+  __m128i_out = __lsx_vsra_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x01203f1e3d1c3b1a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3918371635143312;
+  *((unsigned long*)& __m128i_op1[1]) = 0x21201f1e1d001b25;
+  *((unsigned long*)& __m128i_op1[0]) = 0x191817161514131d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001e8e1d8;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000e400000001;
+  __m128i_out = __lsx_vsra_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000080008;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000fffe01fd02;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000040002;
+  __m128i_out = __lsx_vsra_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc0ff80ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffc0ff80ff800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vsra_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000c0c00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsra_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vsra_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffac0a000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x801d5de0000559e0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x77eb86788eebafe1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffac00000000;
+  __m128i_out = __lsx_vsra_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfcfcfcfcfcfcfcfd;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfcfcfcfcfcfc0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_result[0]) = 0x5252525252525252;
+  __m128i_out = __lsx_vsra_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2e2b34ca59fa4c88;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3b2c8aefd44be966;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsra_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0802080408060803;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00001fffe0001fff;
+  __m128i_out = __lsx_vsra_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f8000007f800000;
+  __m128i_out = __lsx_vsra_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000047fe2f0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000047fe2f0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010012;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fec20704;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000043fe2fc;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000001fffff;
+  __m128i_out = __lsx_vsra_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
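+  /* __lsx_vsrai_{b,h,w,d}: arithmetic right shift of each element by an
+     immediate.  */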
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0x21);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001ffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vsrai_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ca354688;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000003;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000040400000383;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffe000ffff1fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000800000007;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffc0ffff003f;
+  __m128i_out = __lsx_vsrai_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0x2e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf6e91c0000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x51cfd7c000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffd000700000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0014fff500000000;
+  __m128i_out = __lsx_vsrai_h(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_h(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0x3c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3c600000ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0f180000ffe00000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsrai_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21f32eaf5b7a02c8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x407c2ca32cbd0357;
+  *((unsigned long*)& __m128i_result[1]) = 0x10f917d72d3d01e4;
+  *((unsigned long*)& __m128i_result[0]) = 0x203e16d116de012b;
+  __m128i_out = __lsx_vsrai_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_w(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x01ff000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x01ff000000000000;
+  __m128i_out = __lsx_vsrai_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1268f057137a0267;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0048137ef886fae0;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000490000004d;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001ffffffe2;
+  __m128i_out = __lsx_vsrai_w(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ffffff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffffff00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ffffffffff;
+  __m128i_out = __lsx_vsrai_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe80ffffffffff02;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffe80;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0x30);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001800000039;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000049ffffffaa;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000060000000e;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000127fffffea;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0x28);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0aa077b7054c9554;
+  *((unsigned long*)& __m128i_op0[0]) = 0x40c7ee1f38e4c4e8;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vsrai_h(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_w(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x3fff3fff3fff3fff;
+  __m128i_out = __lsx_vsrai_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5d7f5d807fea807f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000002ebf;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0x31);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0x31);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_w(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000190;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010058;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00f0001000000010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x00f0001000000010;
+  __m128i_out = __lsx_vsrai_h(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrai_d(__m128i_op0,0x3d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vsrai_h(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
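+  /* __lsx_vrotr_{b,h,w,d}: rotate each element of op0 right by the
+     corresponding element of op1 (modulo the element width).  */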
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffefffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2001240128032403;
+  *((unsigned long*)& __m128i_op1[0]) = 0x288b248c00010401;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffdfffefffff7ffe;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x697eba2bedfa9c82;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd705c77a7025c899;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x5);
+  *((unsigned long*)& __m128i_op0[1]) = 0x2700000000002727;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000002727;
+  *((unsigned long*)& __m128i_op1[1]) = 0x697eba2bedfa9c82;
+  *((unsigned long*)& __m128i_op1[0]) = 0xd705c77a7025c899;
+  *((unsigned long*)& __m128i_result[1]) = 0xc9c00000000009c9;
+  *((unsigned long*)& __m128i_result[0]) = 0x0013938000000000;
+  __m128i_out = __lsx_vrotr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1000000010000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100100000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x2000000020000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200200000;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x10f917d72d3d01e4;
+  *((unsigned long*)& __m128i_op0[0]) = 0x203e16d116de012b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x10f917d72d3d01e4;
+  *((unsigned long*)& __m128i_result[0]) = 0x203e16d116de012b;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vrotr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x9f009f009f009f00;
+  *((unsigned long*)& __m128i_result[0]) = 0x9f009f009f009f00;
+  __m128i_out = __lsx_vrotr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000004fc04f81;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000004fc04f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000004fc04f81;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000004fc04f80;
+  __m128i_out = __lsx_vrotr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff000000ff00;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000958affff995d;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000de0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001f0a;
+  __m128i_out = __lsx_vrotr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff000100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x41dfffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff000200000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfbffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7bffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfbffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7bffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xf7ffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xf7feffffffffffff;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0ba00ba00ba00ba0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0ba00ba00ba011eb;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf1819b7c0732a6b6;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffb9917a6e7fffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x05d0ba0002e8802e;
+  *((unsigned long*)& __m128i_result[0]) = 0xd005e802174023d6;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000691a6c843c8fc;
+  *((unsigned long*)& __m128i_result[0]) = 0x000691a6918691fc;
+  __m128i_out = __lsx_vrotr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000003f0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffc3ffff003e;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_result[1]) = 0xc000000fc0003fff;
+  *((unsigned long*)& __m128i_result[0]) = 0xbffffff0ffffc00f;
+  __m128i_out = __lsx_vrotr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vrotr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffdfffdfffdfffd;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffdfffdfffdfffd;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffdfffdfffdfffd;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffdfffdfffdfffd;
+  *((unsigned long*)& __m128i_result[1]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m128i_result[0]) = 0xffefffefffefffef;
+  __m128i_out = __lsx_vrotr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001010002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000010002;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080808080808080;
+  __m128i_out = __lsx_vrotr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4e3e133738bb47d2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x9c7c266e71768fa4;
+  __m128i_out = __lsx_vrotr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001a64b345308091;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001f2f2cab1c732a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000014414104505;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1011050040004101;
+  *((unsigned long*)& __m128i_result[1]) = 0x001a323b5430048c;
+  *((unsigned long*)& __m128i_result[0]) = 0x008f792cab1cb915;
+  __m128i_out = __lsx_vrotr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001e03;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001a64b345308091;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001f2f2cab1c732a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000780c00000;
+  __m128i_out = __lsx_vrotr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00020000ffff0001;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000b000b000b000b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000b000b000b000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000b000b000b000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000b000b000b000b;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0005840100000005;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0005847b00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x636363633f3e47c1;
+  *((unsigned long*)& __m128i_op1[0]) = 0x41f8e080f1ef4eaa;
+  *((unsigned long*)& __m128i_result[1]) = 0xa000308000008002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0500847b00000000;
+  __m128i_out = __lsx_vrotr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
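+  /* Tests for the __lsx_vrotri_{b,h,w,d} intrinsics below: rotate each
+     element right by an immediate amount.  */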
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_h(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_d(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_d(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000800000008;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000000020000;
+  __m128i_out = __lsx_vrotri_w(__m128i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_w(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0d1bffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd915e98e2d8df4d1;
+  *((unsigned long*)& __m128i_result[1]) = 0xd0b1ffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x9d519ee8d2d84f1d;
+  __m128i_out = __lsx_vrotri_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x10f917d72d3d01e4;
+  *((unsigned long*)& __m128i_op0[0]) = 0x203e16d116de012b;
+  *((unsigned long*)& __m128i_result[1]) = 0x887c8beb969e00f2;
+  *((unsigned long*)& __m128i_result[0]) = 0x101f8b680b6f8095;
+  __m128i_out = __lsx_vrotri_h(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dffbfff00000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0200400000000001;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x2); /* extracted value is not checked.  */
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0800000008000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0800000008000000;
+  __m128i_out = __lsx_vrotri_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_b(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000c00;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_h(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_w(__m128i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffff01;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffeff400000df4;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffff03fe;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffe9df0000e81b;
+  __m128i_out = __lsx_vrotri_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_w(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000de00003e14;
+  *((unsigned long*)& __m128i_result[0]) = 0x00012b15ffff32ba;
+  __m128i_out = __lsx_vrotri_d(__m128i_op0,0x3f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80001b155b4b0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x80001b155b4b0000;
+  __m128i_out = __lsx_vrotri_w(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_w(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffefffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffefffff;
+  __m128i_out = __lsx_vrotri_w(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_b(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1111113111111141;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111113111111121;
+  *((unsigned long*)& __m128i_result[1]) = 0x1111311111114111;
+  *((unsigned long*)& __m128i_result[0]) = 0x1111311111112111;
+  __m128i_out = __lsx_vrotri_h(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vrotri_h(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_d(__m128i_op0,0x35);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0008000000000000;
+  __m128i_out = __lsx_vrotri_d(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffff800000003;
+  __m128i_out = __lsx_vrotri_d(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000003f0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffc3ffff003e;
+  *((unsigned long*)& __m128i_result[1]) = 0x00001f80007fff80;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffe1ffff801f7f;
+  __m128i_out = __lsx_vrotri_w(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff0000ffff0000f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_d(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ff02d060;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ff02d060;
+  __m128i_out = __lsx_vrotri_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vrotri_d(__m128i_op0,0x3f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_w(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x27b9331b8e77ead9;
+  *((unsigned long*)& __m128i_op0[0]) = 0x58d6bf1867ace738;
+  *((unsigned long*)& __m128i_result[1]) = 0xe4cc6c9edfab6639;
+  *((unsigned long*)& __m128i_result[0]) = 0x5afc6163b39ce19e;
+  __m128i_out = __lsx_vrotri_w(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_d(__m128i_op0,0x2c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_w(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_d(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_h(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_h(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vrotri_d(__m128i_op0,0x21);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vrotri_h(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
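+  /* Tests for the __lsx_vsllwil_* intrinsics below: widen the low-half
+     elements (sign- or zero-extended) and shift left by an immediate.  */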
+  *((unsigned long*)& __m128i_op0[1]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000020;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000007f7f02;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00003f803f800100;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000e0000000e0;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000fc00;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000fc00;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001fffe00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000007fff800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3e2b34ca59fa4c88;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3b2c8aefd44be966;
+  *((unsigned long*)& __m128i_result[1]) = 0x0007658000115de0;
+  *((unsigned long*)& __m128i_result[0]) = 0x001a8960001d2cc0;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x007fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000040000000400;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff80ff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff80000000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000001fffe;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffff000000ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ff00;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000005050000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0505000005050505;
+  *((unsigned long*)& __m128i_result[1]) = 0x0028280000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0028280000282800;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffeb48e03eab7ebe;
+  *((unsigned long*)& __m128i_result[1]) = 0xffc0fac01200f800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0f80eac01f80ef80;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffff800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffc0000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffc0000000000000;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000e7e20468;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc2fac2fa53e7db29;
+  *((unsigned long*)& __m128i_result[1]) = 0xff84fff4ff84fff4;
+  *((unsigned long*)& __m128i_result[0]) = 0x00a6ffceffb60052;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff00ffffff00;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf10cf508f904fd01;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf10cf508f904fd01;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffe218ffffea10;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffff208fffffa02;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000104000800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000040004000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010002000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000020000007d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000001f400000;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21201f1e1d001b1a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1918171615141312;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001918000017160;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001514000013120;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffff60ca7104649;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff790a15db63d;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffc00ffde4000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe857400fed8f400;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000280000;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0014000100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f807f807f807f80;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000040600000406;
+  *((unsigned long*)& __m128i_op0[0]) = 0x020202020202fe02;
+  *((unsigned long*)& __m128i_result[1]) = 0x0020200000202000;
+  *((unsigned long*)& __m128i_result[0]) = 0x002020000fe02000;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000fef01000e27ca;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001fde020000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001c4f940000;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1c6c80007fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0038d800ff000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00fffe00fffffe00;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff800000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff800000000000;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ffffffff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ffffffff00;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000017fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x003fffffff800000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff81010102;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000fffffffe000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000102020204000;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000001ffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000002;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000008000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001030103;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0020006000200060;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0808080808080805;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080808080805;
+  *((unsigned long*)& __m128i_result[1]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0020002000200014;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3131313131313131;
+  *((unsigned long*)& __m128i_result[1]) = 0x0313100003131000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0313100003131000;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000201fe01fc;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000201fe01fc;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8d78336c83652b86;
+  *((unsigned long*)& __m128i_op0[0]) = 0x39c51f389c0d6112;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0); /* extracted value is not checked.  */
+  *((unsigned long*)& __m128i_op0[1]) = 0x8d78336c83652b86;
+  *((unsigned long*)& __m128i_op0[0]) = 0x39c51f389c0d6112;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001ce28f9c0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000004e06b0890;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x002e0059003b0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000005c000000b2;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000007600000000;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2e34594c3b000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x017001a002c80260;
+  *((unsigned long*)& __m128i_result[0]) = 0x01d8000000000000;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff1affff01001fe0;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff1aff6d02834d70;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f800d007f803680;
+  *((unsigned long*)& __m128i_result[0]) = 0x0100418026803800;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x57f160c4a1750eda;
+  *((unsigned long*)& __m128i_result[1]) = 0x000002bf8b062000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffd0ba876d000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_result[1]) = 0x09e009e009e009e0;
+  *((unsigned long*)& __m128i_result[0]) = 0x09e009e009e009e0;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000900000009;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000900000009;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000090;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000090;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
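+  /* Tests for __lsx_vextl_q_d and __lsx_vextl_qu_du below: sign- or
+     zero-extend the low 64-bit element of the vector to 128 bits.  */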
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000101fffff8b68;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000b6fffff8095;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000b6fffff8095;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000104000800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000104000800;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100010000fe7c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000100010000fe01;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000100010000fe01;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000170014;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff0cff78ff96ff14;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xff0cff78ff96ff14;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe500ffffc085;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffc000ffffc005;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffc000ffffc005;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000101010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000800000000000;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xc8847ef6ed3f2000;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x33f5c2d7d975d7fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3131313131313131;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3131313131313131;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3131313131313131;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000d82;
+  *((unsigned long*)& __m128i_op0[0]) = 0x046a09ec009c0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x046a09ec009c0000;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000020;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000007f7f02;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00003f803f800100;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000e0000000e0;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000fc00;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000fc00;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001fffe00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000007fff800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3e2b34ca59fa4c88;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3b2c8aefd44be966;
+  *((unsigned long*)& __m128i_result[1]) = 0x0007658000115de0;
+  *((unsigned long*)& __m128i_result[0]) = 0x001a8960001d2cc0;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x007fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000040000000400;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff80ff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff80000000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000001fffe;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffff000000ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ff00;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000005050000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0505000005050505;
+  *((unsigned long*)& __m128i_result[1]) = 0x0028280000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0028280000282800;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffeb48e03eab7ebe;
+  *((unsigned long*)& __m128i_result[1]) = 0xffc0fac01200f800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0f80eac01f80ef80;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffff800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffc0000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffc0000000000000;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000e7e20468;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc2fac2fa53e7db29;
+  *((unsigned long*)& __m128i_result[1]) = 0xff84fff4ff84fff4;
+  *((unsigned long*)& __m128i_result[0]) = 0x00a6ffceffb60052;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff00ffffff00;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf10cf508f904fd01;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf10cf508f904fd01;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffe218ffffea10;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffff208fffffa02;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000104000800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000040004000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010002000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000020000007d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000001f400000;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21201f1e1d001b1a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1918171615141312;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001918000017160;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001514000013120;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffff60ca7104649;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff790a15db63d;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffc00ffde4000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe857400fed8f400;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000280000;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0014000100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f807f807f807f80;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000040600000406;
+  *((unsigned long*)& __m128i_op0[0]) = 0x020202020202fe02;
+  *((unsigned long*)& __m128i_result[1]) = 0x0020200000202000;
+  *((unsigned long*)& __m128i_result[0]) = 0x002020000fe02000;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000fef01000e27ca;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001fde020000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001c4f940000;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1c6c80007fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0038d800ff000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00fffe00fffffe00;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff800000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff800000000000;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ffffffff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ffffffff00;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000017fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x003fffffff800000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff81010102;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000fffffffe000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000102020204000;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000001ffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000002;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000008000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001030103;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0020006000200060;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0808080808080805;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080808080805;
+  *((unsigned long*)& __m128i_result[1]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0020002000200014;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3131313131313131;
+  *((unsigned long*)& __m128i_result[1]) = 0x0313100003131000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0313100003131000;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000201fe01fc;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000201fe01fc;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8d78336c83652b86;
+  *((unsigned long*)& __m128i_op0[0]) = 0x39c51f389c0d6112;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8d78336c83652b86;
+  *((unsigned long*)& __m128i_op0[0]) = 0x39c51f389c0d6112;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001ce28f9c0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000004e06b0890;
+  __m128i_out = __lsx_vsllwil_du_wu(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsllwil_w_h(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x002e0059003b0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000005c000000b2;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000007600000000;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2e34594c3b000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x017001a002c80260;
+  *((unsigned long*)& __m128i_result[0]) = 0x01d8000000000000;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff1affff01001fe0;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff1aff6d02834d70;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f800d007f803680;
+  *((unsigned long*)& __m128i_result[0]) = 0x0100418026803800;
+  __m128i_out = __lsx_vsllwil_hu_bu(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x57f160c4a1750eda;
+  *((unsigned long*)& __m128i_result[1]) = 0x000002bf8b062000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffd0ba876d000;
+  __m128i_out = __lsx_vsllwil_d_w(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_result[1]) = 0x09e009e009e009e0;
+  *((unsigned long*)& __m128i_result[0]) = 0x09e009e009e009e0;
+  __m128i_out = __lsx_vsllwil_h_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000900000009;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000900000009;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000090;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000090;
+  __m128i_out = __lsx_vsllwil_wu_hu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
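+  /* __lsx_vextl_q_d / __lsx_vextl_qu_du tests: the low 64-bit element is
+     extended to the full 128-bit result, sign-extended or zero-extended
+     respectively, as the expected values show.  */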
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000101fffff8b68;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000b6fffff8095;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000b6fffff8095;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000104000800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000104000800;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100010000fe7c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000100010000fe01;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000100010000fe01;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000170014;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff0cff78ff96ff14;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xff0cff78ff96ff14;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe500ffffc085;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffc000ffffc005;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffc000ffffc005;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000101010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000800000000000;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xc8847ef6ed3f2000;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x33f5c2d7d975d7fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3131313131313131;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3131313131313131;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3131313131313131;
+  __m128i_out = __lsx_vextl_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000d82;
+  *((unsigned long*)& __m128i_op0[0]) = 0x046a09ec009c0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x046a09ec009c0000;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextl_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
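+  /* __lsx_vsrlr_{b,h,w,d} tests: element-wise logical right shift with
+     rounding; the per-element shift amount is taken from the second operand
+     modulo the element width.  */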
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x43e092728266beba;
+  *((unsigned long*)& __m128i_op1[0]) = 0x43d8969cc4afbf2d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsrlr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f8000007f800000;
+  __m128i_out = __lsx_vsrlr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffc001fffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010000200020002;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffff0ffe04000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000200010;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffff01ff01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000101fd01fe;
+  *((unsigned long*)& __m128i_result[1]) = 0xff80ff80ff80ff80;
+  *((unsigned long*)& __m128i_result[0]) = 0xff80ff8080008000;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf51cf8dad6040188;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0982e2daf234ed87;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffff51cf8da;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffd6040188;
+  *((unsigned long*)& __m128i_result[1]) = 0x00020002000d0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000020f2300ee;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffe5;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffe5;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000003fc;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000003fc;
+  __m128i_out = __lsx_vsrlr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000006;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000000;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0040000000400000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0040000000400000;
+  __m128i_out = __lsx_vsrlr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0020808100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe218ffffea10;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff208fffffa02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffe218ffffea10;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffff208fffffa02;
+  __m128i_out = __lsx_vsrlr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x111110ff11111141;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111113111111121;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000f00f;
+  *((unsigned long*)& __m128i_result[1]) = 0x111110ff11111141;
+  *((unsigned long*)& __m128i_result[0]) = 0x1111113111111100;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1f54e0ab00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000003fbf3fbf;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff7ff8;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000100;
+  __m128i_out = __lsx_vsrlr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op0[0]) = 0x370bdfeca2eb9931;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00d3007c014e00bd;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200020002;
+  *((unsigned long*)& __m128i_result[0]) = 0x06e1000e00030005;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0202020202020202;
+  *((unsigned long*)& __m128i_op0[0]) = 0x363d753d50155c0a;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe500c085c000c005;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe5c1a185c48004c5;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002020002020200;
+  *((unsigned long*)& __m128i_result[0]) = 0x021f3b0205150600;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffe000ffdf;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000200000002000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffe000ffdf;
+  __m128i_out = __lsx_vsrlr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffe080f6efc100f7;
+  *((unsigned long*)& __m128i_op1[0]) = 0xefd32176ffe100f7;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000040000000200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000040000000000;
+  __m128i_out = __lsx_vsrlr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fffdfe01;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffdfe0200000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4000000000000000;
+  __m128i_out = __lsx_vsrlr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd70b30c96ea9f4e8;
+  *((unsigned long*)& __m128i_op0[0]) = 0xa352bfac9269e0aa;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xd70b30c96ea9f4e8;
+  *((unsigned long*)& __m128i_result[0]) = 0xa352bfac9269e0aa;
+  __m128i_out = __lsx_vsrlr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000158;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00009c7c00007176;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00009c7c00007176;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001fffeff98;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0014ffe4ff76ffc4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000010;
+  __m128i_out = __lsx_vsrlr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4399d3221a29d3f2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4399d3221a29d3f2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x084d1a0907151a3d;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff9fffefff9ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0280000000000000;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0700f8ff0700f8ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0700f8ff0700f8ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3bc000003a800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000f50000000900;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000090a00000998;
+  *((unsigned long*)& __m128i_result[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000ef0000000003b;
+  __m128i_out = __lsx_vsrlr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0005847b00011005;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0005847b00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000807bf0a1f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000800ecedee68;
+  *((unsigned long*)& __m128i_result[1]) = 0x0005840100000005;
+  *((unsigned long*)& __m128i_result[0]) = 0x0005847b00000000;
+  __m128i_out = __lsx_vsrlr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00c2758000bccf42;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00a975be00accf03;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00250023001c001d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x309d2f342a5d2b34;
+  *((unsigned long*)& __m128i_result[1]) = 0x00060eb000000006;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000075c00000cf0;
+  __m128i_out = __lsx_vsrlr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
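+  /* __lsx_vsrar_{b,h,w,d} tests: the arithmetic (sign-preserving) variant of
+     the rounding right shift above.  */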
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff02ff1bff02ff23;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffffff02fff4;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffff01ff01;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001300000013;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xff00ff00ff00ff00;
+  __m128i_out = __lsx_vsrar_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000400000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffefefe6a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c2bac2c2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff0000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000c2bac2c2;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000010000003f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000010000003f;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x4f804f804f804f80;
+  __m128i_out = __lsx_vsrar_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x80010001b57fc565;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8001000184000be0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x80010001b57fc565;
+  *((unsigned long*)& __m128i_result[0]) = 0x8001000184000be0;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0bd80bd80bdfffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0bd80bd80bd80000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000958affff995d;
+  __m128i_out = __lsx_vsrar_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000080000000000;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc0fffff000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000000bf;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000002bb;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xc0fffff000000000;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffb96bffff57c9;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff6080ffff4417;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffb96bffff57c9;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff6080ffff4417;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vsrar_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3fbf3fbf00007fff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000003a0000003a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000003a0000003a;
+  __m128i_out = __lsx_vsrar_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0086000000040000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0082000000000007;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0086000000040000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0082000000000007;
+  __m128i_out = __lsx_vsrar_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x467f6080467d607f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0037ffc8d7ff2800;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff00000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[1]) = 0x001bffe4ebff9400;
+  *((unsigned long*)& __m128i_result[0]) = 0xff80000000000000;
+  __m128i_out = __lsx_vsrar_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2a29282726252423;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2221201f1e1d1c1b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2a29282726252423;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2221201f1e1d1c1b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000005452505;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000004442403e4;
+  __m128i_out = __lsx_vsrar_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0100010001000100;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
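+  /* The __lsx_vpickve2gr_w call below extracts word element 0 of op0 into a
+     scalar; its value is not compared against an expected result here.  */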
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000002;
+  __m128i_out = __lsx_vsrar_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc0ff80ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000c0c00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0xffc00000ff800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffff7ffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff7ffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffe4866c86;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffe4866c86;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000002000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000002000000;
+  __m128i_out = __lsx_vsrar_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1748c4f9ed1a5870;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1748c4f9ed1a5870;
+  __m128i_out = __lsx_vsrar_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x680485c8b304b019;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc89d7f0ff90da019;
+  *((unsigned long*)& __m128i_op1[1]) = 0x680485c8b304b019;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc89d7f0ff90da019;
+  *((unsigned long*)& __m128i_result[1]) = 0x00680486ffffffda;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff913bfffffffd;
+  __m128i_out = __lsx_vsrar_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrar_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
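+  /* __lsx_vsrlri_{b,h,w,d}: per-element logical shift right of op0 by an
+     immediate, rounding by adding back the last bit shifted out.  */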
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000800000000000;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0005252800052528;
+  *((unsigned long*)& __m128i_result[0]) = 0x0005252800052528;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_d(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0200020002000200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0200020002000200;
+  __m128i_out = __lsx_vsrlri_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffc001fffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000200000;
+  *((unsigned long*)& __m128i_result[0]) = 0x001fff8004000000;
+  __m128i_out = __lsx_vsrlri_d(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0010001000030000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00060001fffe8003;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000200010;
+  __m128i_out = __lsx_vsrlri_h(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000078c00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000078c00000;
+  __m128i_out = __lsx_vsrlri_h(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x4000400000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000040004000;
+  __m128i_out = __lsx_vsrlri_h(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001800390049ffaa;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0029ff96005cff88;
+  *((unsigned long*)& __m128i_result[1]) = 0x001800390049ffaa;
+  *((unsigned long*)& __m128i_result[0]) = 0x0029ff96005cff88;
+  __m128i_out = __lsx_vsrlri_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_h(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_h(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_d(__m128i_op0,0x28);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x03c0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x03c0038000000380;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_h(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_d(__m128i_op0,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_d(__m128i_op0,0x28);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xa2a2a2a3a2a2a2a3;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc605c000aedd0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000005151515;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000006302e00;
+  __m128i_out = __lsx_vsrlri_d(__m128i_op0,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2000200000013fa0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000013fa0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000020000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000020000000;
+  __m128i_out = __lsx_vsrlri_d(__m128i_op0,0x23);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op0[0]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000dc300003ffb;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000dc300003ffb;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808000000035;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000200000000;
+  __m128i_out = __lsx_vsrlri_h(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00018d8e00018d8e;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f801fe000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3fc03fc000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000003fc00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001fe01fe00;
+  __m128i_out = __lsx_vsrlri_d(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000200020002;
+  __m128i_out = __lsx_vsrlri_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000800080008000;
+  __m128i_out = __lsx_vsrlri_h(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_d(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x045340a628404044;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003ddc5dac;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001030103;
+  __m128i_out = __lsx_vsrlri_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_h(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlri_h(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x86dd8341b164f12b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9611c3985b3159f5;
+  *((unsigned long*)& __m128i_result[1]) = 0x0021b761002c593c;
+  *((unsigned long*)& __m128i_result[0]) = 0x002584710016cc56;
+  __m128i_out = __lsx_vsrlri_w(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_op0[0]) = 0xbbc8ecc5f3ced5f3;
+  *((unsigned long*)& __m128i_result[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_result[0]) = 0xbbc8ecc5f3ced5f3;
+  __m128i_out = __lsx_vsrlri_d(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000feff23560000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fd1654860000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000080801030000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000080103040000;
+  __m128i_out = __lsx_vsrlri_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
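+  /* __lsx_vsrari_{b,h,w,d}: per-element arithmetic shift right of op0 by an
+     immediate, with the same rounding rule as vsrlri.  */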
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_h(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000cb4a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000f909;
+  __m128i_out = __lsx_vsrari_b(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_d(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf4b6f3f52f4ef4a8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff5fff4002ffff5;
+  __m128i_out = __lsx_vsrari_h(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffc0ff81000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffff0ffe04000;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffe5;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffe5;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000000f3;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000f3;
+  __m128i_out = __lsx_vsrari_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000100;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000fdfc0000fd03;
+  __m128i_out = __lsx_vsrari_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000017161515;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000095141311;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_d(__m128i_op0,0x34);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21201f1e19181716;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000109000000c9;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x77c0404a4000403a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x77c03fd640003fc6;
+  *((unsigned long*)& __m128i_result[1]) = 0x00f0008100800080;
+  *((unsigned long*)& __m128i_result[0]) = 0x00f0008000800080;
+  __m128i_out = __lsx_vsrari_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000006c80031;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_d(__m128i_op0,0x3c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000a6;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_h(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001200100012001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000080000000800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_d(__m128i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_b(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_b(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000404040;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000020;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x30eb020302101b03;
+  *((unsigned long*)& __m128i_op0[0]) = 0x020310d0c0030220;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_b(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x004d004d004d004d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x004d004d004d004d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001340134013401;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001340134013401;
+  __m128i_out = __lsx_vsrari_d(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_w(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrari_h(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
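+  /* __lsx_vsrln_{b_h,h_w,w_d}: each double-width element of op0 is shifted
+     right logically by the amount in the corresponding element of op1, then
+     truncated to the narrower type; the narrowed results fill the low 64
+     bits of the destination and the high 64 bits are cleared.  */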
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000c77c000047cd;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000c0f100006549;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffdfff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffdfff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffe00001ffe200;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001ffffdfff;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff35cab978;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff35cab978;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000010035;
+  __m128i_out = __lsx_vsrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000020;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x80307028ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8040007fffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0101ff010101;
+  __m128i_out = __lsx_vsrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000001000100;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000001000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0141010101410101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0141010101410101;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4180418041804180;
+  __m128i_out = __lsx_vsrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001ffff0003ffff0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000fffefffefffef;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00000000;
+  __m128i_out = __lsx_vsrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00008bf700017052;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000f841000091aa;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe6d4572c8a5835bc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe5017c2ac9ca9fd0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000f8410000;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001010001;
+  __m128i_out = __lsx_vsrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000100000001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0ed5ced7e51023e5;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00001000e51023e5;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffbfff8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000010001;
+  __m128i_out = __lsx_vsrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000020002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000020002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000017ffeffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000017ffeffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x379674c000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3789f68000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfefeff00fefeff00;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfefeff00fefeff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00c0000000800000;
+  __m128i_out = __lsx_vsrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c7c266e71768fa4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000071768fa4;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffdfdc0d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffdfdc0d;
+  __m128i_out = __lsx_vsrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000246d9755;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000002427c2ee;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
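+  /* __lsx_vsran_{b_h,h_w,w_d}: as vsrln above, but the shift is arithmetic
+     (sign-preserving) before narrowing.  */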
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x007fffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000fffe0001fffe;
+  __m128i_out = __lsx_vsran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0303020102020001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000000000201;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xd82480697f678077;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0301020100000004;
+  __m128i_out = __lsx_vsran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffff02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ff02;
+  __m128i_out = __lsx_vsran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3c5fffffff7fffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffefffeff00feff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000e0180000e810;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000f0080000f800;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000e0180000e810;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000f0080000f800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000f0f800;
+  __m128i_out = __lsx_vsran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff7fff00000000;
+  __m128i_out = __lsx_vsran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100089bde;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000104000800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x80044def00000001;
+  __m128i_out = __lsx_vsran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000100f8100002;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff0ff8006f0f950;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vsran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff7a53;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000000bf;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000002bb;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vsran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000021e79364;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000718ea657431b;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfefffffffeffda6f;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfefffffffeffe3d7;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ff0000ff86;
+  __m128i_out = __lsx_vsran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0101fe870101fe87;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0101fe8700000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x353c8cc4b1ec5b09;
+  *((unsigned long*)& __m128i_op1[1]) = 0x002affd600000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcbc2723a4f12a5f8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080808000000035;
+  __m128i_out = __lsx_vsran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001000000ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff80ff00ff80ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010000000000;
+  __m128i_out = __lsx_vsran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff000ff6220c0c1;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffe8081000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ff000ff6220c0c1;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffe8081000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xb110606000000000;
+  __m128i_out = __lsx_vsran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0037ffd40083ffe5;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001e0052001ffff9;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00df020f0078007f;
+  __m128i_out = __lsx_vsran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff80ffa2fff0ff74;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff76ffd8ffe6ffaa;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffc105d1aa;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffbc19ecca;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffe03ff63ff9bf;
+  __m128i_out = __lsx_vsran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x06d9090909090909;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0039d21e3229d4e8;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6d339b4f3b439885;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000db24848;
+  __m128i_out = __lsx_vsran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff80000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfe3bfb01fe3bfe01;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe03fe3ffe01fa21;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vsran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
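+  /* The vsrlni.* cases below cover the immediate-count "shift right
+     logical and narrow (insert)" forms.  As I understand them, every
+     double-width element of both operands is shifted right logically by
+     the immediate and truncated, with the narrowed op1 elements packed
+     into the low half of the destination and the narrowed op0 elements
+     into the high half.  E.g. the first case shifts 0x7fc000007fc00000
+     right by 17 and keeps the low 32 bits, giving 0x00003fe0.  */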
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1e801ffc7fc00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00003fe00ffe3fe0;
+  __m128i_out = __lsx_vsrlni_w_d(__m128i_op0,__m128i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlni_b_h(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000001f;
+  __m128i_out = __lsx_vsrlni_d_q(__m128i_op0,__m128i_op1,0x7b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlni_w_d(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xc39fffff007fffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000fe00fd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x78c00000ff000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x61cf003f0000007f;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000003c607f80;
+  __m128i_out = __lsx_vsrlni_h_w(__m128i_op0,__m128i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff7f01ff01;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff7f01ff01;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffe03;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffe03;
+  __m128i_out = __lsx_vsrlni_h_w(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff8001ffff8001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x001ffff0003ffff0;
+  *((unsigned long*)& __m128i_result[0]) = 0x000fffefffefffef;
+  __m128i_out = __lsx_vsrlni_d_q(__m128i_op0,__m128i_op1,0x4b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363797c63990099;
+  *((unsigned long*)& __m128i_op0[0]) = 0x171f0a1f6376441f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363797c63990099;
+  *((unsigned long*)& __m128i_op1[0]) = 0x171f0a1f6376441f;
+  *((unsigned long*)& __m128i_result[1]) = 0x181e180005021811;
+  *((unsigned long*)& __m128i_result[0]) = 0x181e180005021811;
+  __m128i_out = __lsx_vsrlni_b_h(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00003fff00003fff;
+  __m128i_out = __lsx_vsrlni_h_w(__m128i_op0,__m128i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf0fd800080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000a00028004000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000f000800000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x000f000000000000;
+  __m128i_out = __lsx_vsrlni_h_w(__m128i_op0,__m128i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlni_w_d(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xaeaeaeaeaeaeae35;
+  *((unsigned long*)& __m128i_op0[0]) = 0xaeaeaeaeaeaeae35;
+  *((unsigned long*)& __m128i_op1[1]) = 0xaeaeaeaeaeaeae35;
+  *((unsigned long*)& __m128i_op1[0]) = 0xaeaeaeaeaeaeae35;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000002;
+  __m128i_out = __lsx_vsrlni_w_d(__m128i_op0,__m128i_op1,0x3e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlni_h_w(__m128i_op0,__m128i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00000000;
+  __m128i_out = __lsx_vsrlni_b_h(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000008140c80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000008140c80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000002050320;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000002050320;
+  __m128i_out = __lsx_vsrlni_w_d(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000002050320;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000002050320;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op1[0]) = 0x010101017f010101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000040600000406;
+  *((unsigned long*)& __m128i_result[0]) = 0x020202020202fe02;
+  __m128i_out = __lsx_vsrlni_b_h(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe364525335ede000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000fff00000e36;
+  __m128i_out = __lsx_vsrlni_w_d(__m128i_op0,__m128i_op1,0x34);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x601fbfbeffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffb00fdfdf7ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff8000000000000;
+  __m128i_out = __lsx_vsrlni_d_q(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlni_h_w(__m128i_op0,__m128i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsrlni_h_w(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000455555555;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000008;
+  __m128i_out = __lsx_vsrlni_d_q(__m128i_op0,__m128i_op1,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlni_w_d(__m128i_op0,__m128i_op1,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7c7c000000007176;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000f3040705;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000001f1f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlni_d_q(__m128i_op0,__m128i_op1,0x32);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000bffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000040001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlni_d_q(__m128i_op0,__m128i_op1,0x6d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xe4c8b96e2560afe9;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc001a1867fffa207;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe4c8b96e2560afe9;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc001a1867fffa207;
+  *((unsigned long*)& __m128i_result[1]) = 0xe2560afe9c001a18;
+  *((unsigned long*)& __m128i_result[0]) = 0xe2560afe9c001a18;
+  __m128i_out = __lsx_vsrlni_d_q(__m128i_op0,__m128i_op1,0x24);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000042ab41;
+  *((unsigned long*)& __m128i_op0[0]) = 0xb1b1b1b1b16f0670;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000044470000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000100;
+  __m128i_out = __lsx_vsrlni_b_h(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x020310edc003023d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000080c43b700;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlni_d_q(__m128i_op0,__m128i_op1,0x56);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x30eb022002101b20;
+  *((unsigned long*)& __m128i_op0[0]) = 0x020310edc003023d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x30eb022002101b20;
+  *((unsigned long*)& __m128i_op1[0]) = 0x020310edc003023d;
+  *((unsigned long*)& __m128i_result[1]) = 0x022002101b200203;
+  *((unsigned long*)& __m128i_result[0]) = 0x022002101b200203;
+  __m128i_out = __lsx_vsrlni_d_q(__m128i_op0,__m128i_op1,0x30);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlni_b_h(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
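+  /* The vsrani.* group is the arithmetic (sign-propagating) counterpart
+     of vsrlni above; apart from the shift being arithmetic, the packing
+     of the narrowed op1/op0 elements into the low/high halves of the
+     destination appears to be the same.  */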
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000b0000000b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000201000000000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0005000501800005;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrani_b_h(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x438ff81ff81ff820;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x03ff03ff03ff03ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000043;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000003;
+  __m128i_out = __lsx_vsrani_d_q(__m128i_op0,__m128i_op1,0x78);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000002020202;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrani_d_q(__m128i_op0,__m128i_op1,0x5b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000009;
+  *((unsigned long*)& __m128i_op1[1]) = 0x697eba2bedfa9c82;
+  *((unsigned long*)& __m128i_op1[0]) = 0xd705c77a7025c899;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x03fdfffcfefe03fe;
+  __m128i_out = __lsx_vsrani_b_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0100000001000100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100010000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ffffff00ff00ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff00ffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000010001000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff00ff00ffffff;
+  __m128i_out = __lsx_vsrani_h_w(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrani_b_h(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x40f0001000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x40f0001000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1e0200001e020000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrani_b_h(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0800080008000800;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0800080008000800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0040004000400040;
+  __m128i_out = __lsx_vsrani_w_d(__m128i_op0,__m128i_op1,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000040000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrani_d_q(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001ffce00016fb41;
+  *((unsigned long*)& __m128i_op0[0]) = 0x57cb857100001a46;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfbffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7bffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000150000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffeffff001effff;
+  __m128i_out = __lsx_vsrani_h_w(__m128i_op0,__m128i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2020202020207f7f;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2020202020207fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x01010101010101ff;
+  __m128i_out = __lsx_vsrani_b_h(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff082f000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003f000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vsrani_h_w(__m128i_op0,__m128i_op1,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrani_h_w(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00005dcbe7e830c0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x03f21e0114bf19da;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000003f200001e01;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000014bf000019da;
+  *((unsigned long*)& __m128i_result[1]) = 0x0005fe0300010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100010001;
+  __m128i_out = __lsx_vsrani_b_h(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x62cbf96e4acfaf40;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf0bc9a5278285a4a;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x62cbf96e4acfaf40;
+  __m128i_out = __lsx_vsrani_d_q(__m128i_op0,__m128i_op1,0x40);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffacdb6dbecac;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1f5533a694f902c0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1f54e0ab00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffb6d01f5f94f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001f50000;
+  __m128i_out = __lsx_vsrani_h_w(__m128i_op0,__m128i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrani_w_d(__m128i_op0,__m128i_op1,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x808080e280808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080636380806363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080808080638063;
+  __m128i_out = __lsx_vsrani_b_h(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000001d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000001d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrani_d_q(__m128i_op0,__m128i_op1,0x63);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0f07697100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000076971000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrani_w_d(__m128i_op0,__m128i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000003020302;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffff81;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000c0c00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffe;
+  __m128i_out = __lsx_vsrani_b_h(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrani_d_q(__m128i_op0,__m128i_op1,0x58);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vsrani_w_d(__m128i_op0,__m128i_op1,0x3a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrani_b_h(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5847b72626ce61ef;
+  *((unsigned long*)& __m128i_op0[0]) = 0x110053f401e7cced;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5847b72626ce61ef;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0005847b00011005;
+  *((unsigned long*)& __m128i_result[0]) = 0x0005847b00000000;
+  __m128i_out = __lsx_vsrani_w_d(__m128i_op0,__m128i_op1,0x2c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
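+  /* vsrlrn.* below: shift right logical with rounding, then narrow.  My
+     reading is that 2^(count-1) is added before the logical shift (no
+     rounding when the count is zero), with the per-element counts taken
+     from op1 as for vsran, and the narrowed results again filling only
+     the low half of the destination.  */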
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001ffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000efffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vsrlrn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000040400000383;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffe000ffff1fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000383ffff1fff;
+  __m128i_out = __lsx_vsrlrn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000003fc;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000003fc;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x002affd600000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcbc2723a4f12a5f8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffd60001723aa5f8;
+  __m128i_out = __lsx_vsrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x467f6080467d607f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080808080808081;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xe000e0006080b040;
+  __m128i_out = __lsx_vsrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101010101030101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101030101;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000fffa0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fffa0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101000101010001;
+  __m128i_out = __lsx_vsrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff80ffffffffff80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff80ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6a5d5b056f2f4978;
+  *((unsigned long*)& __m128i_op1[0]) = 0x17483c07141b5971;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0800010001ff8000;
+  __m128i_out = __lsx_vsrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff01ff01ac025c87;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff01ff01ac465ca1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
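+  /* vsrarn.* is, as far as I can tell, the arithmetic-shift variant of
+     vsrlrn: round, shift right arithmetically by the per-element count
+     from op1, narrow, and place the packed result in the low half of
+     the destination.  */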
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffefffffffef;
+  __m128i_out = __lsx_vsrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffff1;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffff1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefff6fff80002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff000000fefb0000;
+  __m128i_out = __lsx_vsrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000c2f90000bafa;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000c2fa8000c2fa;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xc2f9bafac2fac2fa;
+  __m128i_out = __lsx_vsrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffff0204;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x01203f1e3d1c3b1a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3918371635143312;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000001d5d4;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000150d707009;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x03f1e3d28b1a8a1a;
+  __m128i_out = __lsx_vsrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffefffefffeffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffefffefffeffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff7f810100001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001fffc0ffffe001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000002259662;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc4dbe60354005d25;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f01000000f8ff00;
+  __m128i_out = __lsx_vsrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff6ff4ffff8db8;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffbaf4ffffb805;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9c7c266e71768fa4;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff4ffb800ff0080;
+  __m128i_out = __lsx_vsrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000044470000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00004dce00004700;
+  __m128i_out = __lsx_vsrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0b4c600000000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x08080807f5f5f5f8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0202f5f80000ff00;
+  __m128i_out = __lsx_vsrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0d060d060d060d06;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0d060d060d060d06;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0d060d060d060d06;
+  __m128i_out = __lsx_vsrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000011ff040;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff01fe03ff01fe03;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff01fe03ff01fe03;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff01fe03ff01fe03;
+  __m128i_out = __lsx_vsrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
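+  /* The final group covers vsrlrni.*, the immediate-count rounding
+     logical-shift-and-narrow forms: 2^(imm-1) is added before the
+     shift and, as with vsrlni, the narrowed op1 elements fill the low
+     half of the destination while the narrowed op0 elements fill the
+     high half (at least, the expected vectors here are consistent with
+     that reading).  */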
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff8969ffffd7e2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000d688ffffbd95;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xf12dfafc1ad1f7b3;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x34);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000200000002000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000200000002000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010000000100;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000001000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x2f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000c0002000c0002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000400c600700153;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000c0002000c0002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000400c600700153;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000010000007f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000fffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0800000400000800;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000001515151500;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001515151500;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001515000015150;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fdfd0404;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3fffffff3fffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3fffffff3fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000fc08;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000800080008000;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000fc08;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffba420000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x000007e044000400;
+  *((unsigned long*)& __m128i_result[0]) = 0xfdd2100000000000;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000081e003f3f3f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3f3f3f0e00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000081e003f3f3f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3f3f3f0e00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000103c007e7e8;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000103c007e7e8;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x43);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0202022302023212;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0202ff3f02022212;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000002100003010;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ff3f00002010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x79);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffff7fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe2bb5ff00e20aceb;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe2bb5ff00e20aceb;
+  *((unsigned long*)& __m128i_result[1]) = 0x0100010000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00e3000e00e3000e;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf58df7841423142a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3f7477f8ff4e2152;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3d3e0505101e4008;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2bd5d429e34a1efb;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfc0203fccbedbba7;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc9f66947f077afd0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x89fed7c07fdf5d00;
+  *((unsigned long*)& __m128i_result[1]) = 0x14f1a50ffe65f6de;
+  *((unsigned long*)& __m128i_result[0]) = 0xa3f83bd8e03fefaf;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6ed694e00e0355db;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000010600000106;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xe00e035606000001;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xe739e7ade77ae725;
+  *((unsigned long*)& __m128i_op0[0]) = 0xbb9013bd049bc9ec;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x56aca41400000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7ade77ae3bd049bd;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000041400000;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1010101010101010;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1010101010101010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8081808180818081;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000006ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0037f80000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x69);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0020202020202020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0080808080c04040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0101010001808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000202000008081;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001010100010101;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x28);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00fff00000001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x28);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x6b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000adf0000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001e00;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0040000000400040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000020002020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808102;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001010102;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001000100010000b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x03fc03fc03fc03fc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x04000400ff01ff01;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1010101010101010;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000fff800000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000001ed68;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1ff6a09e667f3bd8;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000007b5a;
+  *((unsigned long*)& __m128i_result[0]) = 0x999fcef600000000;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffe5c8000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x91f80badc162a0c4;
+  *((unsigned long*)& __m128i_op1[0]) = 0x99d1ffff0101ff01;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff400000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x905d0b06cf0008f8;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3802f4fd025800f7;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc8ff0bffff00ffae;
+  *((unsigned long*)& __m128i_op1[0]) = 0x91ff40fffff8ff50;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000200000000700;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000192000001240;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x33);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff0ffd0ffd;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff0ffc0001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbb7743ca4c78461f;
+  *((unsigned long*)& __m128i_op1[0]) = 0xd9743eb5fb4deb3a;
+  *((unsigned long*)& __m128i_result[1]) = 0x003fffffffc3ff44;
+  *((unsigned long*)& __m128i_result[0]) = 0x002eddd0f2931e12;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x4a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xbb7743ca4c78461f;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd9743eb5fb4deb3a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x22445e1ad9c3e4f0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1b43e8a30a570a63;
+  *((unsigned long*)& __m128i_result[1]) = 0x743ca4c843eb5fb5;
+  *((unsigned long*)& __m128i_result[0]) = 0x45e1ad9c3e8a30a5;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1204900f62f72565;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4901725600000000;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x6a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000400000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000300000003;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x32);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3f3f3f7fbf3fffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x47);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000040804080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000020100000000;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffe8ffff28fc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00007fff0000803e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000006ffff81e1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0ffffffe8ffff290;
+  *((unsigned long*)& __m128i_result[0]) = 0x000007fff0000804;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x44);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000418200000008e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000002100047;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636362;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636362;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363636362;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636362;
+  *((unsigned long*)& __m128i_result[1]) = 0x0032003200320032;
+  *((unsigned long*)& __m128i_result[0]) = 0x0032003200320032;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff01010102;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ffdf87f0b0c7f7f;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf6b3eb63f6b3f6b3;
+  *((unsigned long*)& __m128i_op1[0]) = 0x363953e42b56432e;
+  *((unsigned long*)& __m128i_result[1]) = 0x010000010080000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x00f700f70036002b;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xed67d6c7ed67ed67;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6c72a7c856ac865c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000700000003;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x3d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffff40ff83;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1010101010101010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000003030103;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000003030103;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000006060;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000006060;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000002408beb26c8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000706e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000028c27;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000070;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x80000b0b80000b0b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000101080001010;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffefefffffeff0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0061006100020002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000fe00fe;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000078087f08;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000078087f08;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000e0fc0000e0fc;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ff0bff76;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x75);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x33);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff00ff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000ff00ffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8282828282828282;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000828282828282;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0008000800000008;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00f7000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000005150;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000005150;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000f7000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x24);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x41afddcb1c000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd09e1bd99a2c6eb1;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe82f7c27bb0778af;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000040002;
+  *((unsigned long*)& __m128i_result[0]) = 0x000d000a000f000c;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffff8000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffdff0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0144329880000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x007fffc0007ffff0;
+  *((unsigned long*)& __m128i_result[0]) = 0x004000004c400000;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001e0000001e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffafff0fff9ff01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000d800cff8;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrlrni_h_w(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000002000007d7;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000300000ff1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x000007d700000ff1;
+  __m128i_out = __lsx_vsrlrni_w_d(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffff00ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffff00ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000ff8;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001000;
+  __m128i_out = __lsx_vsrlrni_d_q(__m128i_op0,__m128i_op1,0x74);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000f08;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x2020202020202020;
+  __m128i_out = __lsx_vsrlrni_b_h(__m128i_op0,__m128i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ff020000fff4;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff020000fff4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fc0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1e801ffc00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000080007f80800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001000000;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff0000ff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_d_q(__m128i_op0,__m128i_op1,0x4b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000001e5;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x5000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_d_q(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff8000002f4ef4a8;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000f4a8;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00100184017e0032;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0086018c01360164;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffff33c4b1e67;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000800c0004300c;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_w_d(__m128i_op0,__m128i_op1,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_b_h(__m128i_op0,__m128i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_d_q(__m128i_op0,__m128i_op1,0x66);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4101010141010100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000001ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0020808100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_w_d(__m128i_op0,__m128i_op1,0x29);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_d_q(__m128i_op0,__m128i_op1,0x64);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001ffff0003ffff0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x028c026bfff027af;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000003fc03fc00;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffc00a3009b000;
+  __m128i_out = __lsx_vsrarni_d_q(__m128i_op0,__m128i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ffa7f8ff81;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000003f0080ffc0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000007fff00ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000a7f87fffff81;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffd400000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000004000000040;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003f800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003f800000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000080003f80ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000001fc00000000;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff80010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff80010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0bd80bd80bdfffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0bd80bd80bd80000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1ffffffff8001000;
+  *((unsigned long*)& __m128i_result[0]) = 0xf0bd80bd80bd8000;
+  __m128i_out = __lsx_vsrarni_d_q(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_w_d(__m128i_op0,__m128i_op1,0x24);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xecec006c00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xecec006c00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff007f00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff007f00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_b_h(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000001ff85ffdc0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000332ae5d97330;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1ff85ffe2ae5d973;
+  __m128i_out = __lsx_vsrarni_w_d(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000043c5ea7b6;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000008fc4ef7b4;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000fea0000fffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_b_h(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_d_q(__m128i_op0,__m128i_op1,0x48);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000dfa6e0c6;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000d46cdc13;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_d_q(__m128i_op0,__m128i_op1,0x64);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x09e8e9012fded7fd;
+  *((unsigned long*)& __m128i_op0[0]) = 0x479f64b03373df61;
+  *((unsigned long*)& __m128i_op1[1]) = 0x04c0044a0400043a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x04c004d6040004c6;
+  *((unsigned long*)& __m128i_result[1]) = 0x1d20db00ec967bec;
+  *((unsigned long*)& __m128i_result[0]) = 0x00890087009b0099;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000080800000808;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080000180800001;
+  __m128i_out = __lsx_vsrarni_w_d(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000003e;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00fe00fe000200fe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00fe00fe000200fe;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000003e;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefe02fefefe02fe;
+  __m128i_out = __lsx_vsrarni_b_h(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_b_h(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000200000002000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1000000010000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000020000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0103000201030002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_w_d(__m128i_op0,__m128i_op1,0x3f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_w_d(__m128i_op0,__m128i_op1,0x26);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffc000400000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00003fff00010000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_d_q(__m128i_op0,__m128i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_d_q(__m128i_op0,__m128i_op1,0x6d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff010000ff01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_w_d(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_result[1]) = 0xf359f359f359f359;
+  *((unsigned long*)& __m128i_result[0]) = 0xf359f359f359f359;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000016;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000016;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_b_h(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_w_d(__m128i_op0,__m128i_op1,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x01533b5e7489ae24;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffab7e71e33848;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xce9135c49ffff570;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_d_q(__m128i_op0,__m128i_op1,0x23);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000807bf0a1f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000800ecedee68;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0005840100000005;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0005847b00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001f0a20001cedf;
+  *((unsigned long*)& __m128i_result[0]) = 0x0058000000580000;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffb1fb1000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf2c97aaa7d8fa270;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0b73e427f7cfcb88;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsrarni_w_d(__m128i_op0,__m128i_op1,0x3f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0a545374471b7070;
+  *((unsigned long*)& __m128i_op0[0]) = 0x274f4f0648145f50;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_result[1]) = 0xa8a736e19e9e28bf;
+  *((unsigned long*)& __m128i_result[0]) = 0x9e9f9e9f9e9f9e9f;
+  __m128i_out = __lsx_vsrarni_h_w(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080808000008080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080000080800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffff0000;
+  __m128i_out = __lsx_vssrln_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5ff6a0a40ea8f47c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5ff6a0a40e9da42a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00003ff000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fffc00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff0000fffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001afffffff7;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000750500006541;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000100fffffefd;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff00000000;
+  __m128i_out = __lsx_vssrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80000000fff6fc00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f0000007f000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080000180800100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ff00ffff;
+  __m128i_out = __lsx_vssrln_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007d3ac600;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0x7);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffefff6fff80002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffe5;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffe5;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101017f0101017f;
+  __m128i_out = __lsx_vssrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00005a5a00005a5a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00005b5a00005b5a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vssrln_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x65b780a2ae3bf8ca;
+  *((unsigned long*)& __m128i_op1[0]) = 0x161d0c373c200827;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000001ff;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf10cf508f904fd01;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf10cf508f904fd01;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf8f8e018f8f8e810;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf8f8f008f8f8f800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001ffff0003ffff0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000fffefffefffef;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffefffef;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrln_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000f00;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff00000000;
+  __m128i_out = __lsx_vssrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000c0000bd49;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000c7fff000c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000f0009d3c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000016fff9d3d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000c000000060003;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000003a24;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003dbe88077c78c1;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fffe0001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00003a247fff7fff;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrln_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000003fbf3fbf;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff7fff7fff7ff8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3fbf3fbf00007fff;
+  __m128i_out = __lsx_vssrln_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000fff00000e36;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000fff0e36;
+  __m128i_out = __lsx_vssrln_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffe000ffdf;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff7fff7fff7fff;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0001ffff9515;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vssrln_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vssrln_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffc0800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000007fff0018;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vssrln_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080000700000014;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffbffda;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3e25c8317394dae6;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcda585aebbb2836a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000ac00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vssrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000c6c6c6c6;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000c6c6c6c6;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f7f7f7f7f7f7f7f;
+  __m128i_out = __lsx_vssrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vssrln_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x64616462b76106dc;
+  *((unsigned long*)& __m128i_op1[0]) = 0x64616462b71d06c2;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00c0c000c0000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc0000000c000c000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00c0c000c0000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc0000000c000c000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff7fff7fff7fff;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001e001e001e001e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001e001e001e001e;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff7fff7fff7fff;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001700000017;
+  *((unsigned long*)& __m128i_op0[0]) = 0x59f7fd8759f7fd87;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000001700000017;
+  *((unsigned long*)& __m128i_op1[0]) = 0x59f7fd8759f7fd87;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000007fff7fff;
+  __m128i_out = __lsx_vssrln_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffc0000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff00000001;
+  __m128i_out = __lsx_vssrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00007fff7fff8000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000007f7f7f;
+  __m128i_out = __lsx_vssrln_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf589caff5605f2fa;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrln_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000a74aa8a55ab;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6adeb5dfcb000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vssrln_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vssrln_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xbf8000000000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcf00000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003f00000000003f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003f000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000210011084;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffc000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000007fff0000;
+  __m128i_out = __lsx_vssran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffefffffffeff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffcff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000080000000;
+  __m128i_out = __lsx_vssran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x02b504f305a5c091;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x02b504f305a5c091;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000005602d2;
+  __m128i_out = __lsx_vssran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000003f80b0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xb327b9363c992b2e;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa1e7b475d925730f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000001ff00;
+  __m128i_out = __lsx_vssran_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0060e050007f0160;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0040007fff800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ffffff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1268f057137a0267;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0048137ef886fae0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000000;
+  __m128i_out = __lsx_vssran_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0141010101410101;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0141010101410101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x75b043c4d17db125;
+  *((unsigned long*)& __m128i_op1[0]) = 0xeef8227b4f8017b1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x027c027c000027c0;
+  __m128i_out = __lsx_vssran_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000006f00000000;
+  __m128i_out = __lsx_vssran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffd000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vssran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff994db09c;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffc7639d96;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vssran_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0fff0fff0fff0fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0fff0fff0fff0fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssran_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0x9);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f80000080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x800080007f008000;
+  __m128i_out = __lsx_vssran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000695d00009b8f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000074f20000d272;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00001f5400000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff000000ff0000;
+  __m128i_out = __lsx_vssran_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00010000fffffffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00010000fffffffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff00000000;
+  __m128i_out = __lsx_vssran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x31b1777777777776;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6eee282828282829;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000006362ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vssran_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ff801c9e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000810000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x40eff02383e383e4;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000800000007fff;
+  __m128i_out = __lsx_vssran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffb00fdfdf7ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff8000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000c0c00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000a74aa8a55ab;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6adeb5dfcb000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a7480007fff8000;
+  __m128i_out = __lsx_vssran_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000fe00fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000f50000007500;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00007e1600007d98;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00fe00fe7fffffff;
+  __m128i_out = __lsx_vssran_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f4f4f4f4f4f0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f4f4f4f4f4f0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f7f7f007f7f7f00;
+  __m128i_out = __lsx_vssran_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff80000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001ffff00000000;
+  __m128i_out = __lsx_vssrlni_w_d(__m128i_op0,__m128i_op1,0x2f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_du_q(__m128i_op0,__m128i_op1,0x4f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_b_h(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x004e005500060031;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff870068fff5ffb3;
+  *((unsigned long*)& __m128i_op1[1]) = 0x004e005500060031;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff870068fff5ffb3;
+  *((unsigned long*)& __m128i_result[1]) = 0x04e00060ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x04e00060ffffffff;
+  __m128i_out = __lsx_vssrlni_hu_w(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vssrlni_wu_d(__m128i_op0,__m128i_op1,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808000008080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080000080800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001010100010100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_wu_d(__m128i_op0,__m128i_op1,0x2f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000080007f80800;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000001000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00047fff00007fff;
+  __m128i_out = __lsx_vssrlni_h_w(__m128i_op0,__m128i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ff0000ff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x01fc020000fe0100;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000003fc0003;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_d_q(__m128i_op0,__m128i_op1,0x56);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000017fda829;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_du_q(__m128i_op0,__m128i_op1,0x27);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0c03e17edd781b11;
+  *((unsigned long*)& __m128i_op0[0]) = 0x342caf9bffff1fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000040000000400;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0c037fff342c7fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_h_w(__m128i_op0,__m128i_op1,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000fff8fff8;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fff80000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_wu_d(__m128i_op0,__m128i_op1,0x37);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffff100fffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff00000000;
+  __m128i_out = __lsx_vssrlni_w_d(__m128i_op0,__m128i_op1,0x21);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffff100fffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffff100fffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vssrlni_bu_h(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_bu_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_w_d(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000800080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_w_d(__m128i_op0,__m128i_op1,0x38);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffff800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x001fffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_d_q(__m128i_op0,__m128i_op1,0x4b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a000a000a000a00;
+  __m128i_out = __lsx_vssrlni_h_w(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf2f2e5e5e5e5e5dc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_w_d(__m128i_op0,__m128i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_b_h(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_h_w(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_w_d(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000003fc0;
+  __m128i_out = __lsx_vssrlni_wu_d(__m128i_op0,__m128i_op1,0x22);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_du_q(__m128i_op0,__m128i_op1,0x35);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_b_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_hu_w(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffffffffffff;
+  __m128i_out = __lsx_vssrlni_d_q(__m128i_op0,__m128i_op1,0x35);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrlni_hu_w(__m128i_op0,__m128i_op1,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op1[1]) = 0x41dfffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000083b00000000;
+  __m128i_out = __lsx_vssrlni_w_d(__m128i_op0,__m128i_op1,0x33);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000003;
+  __m128i_out = __lsx_vssrlni_d_q(__m128i_op0,__m128i_op1,0x7e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1ff85ffe2ae5d973;
+  *((unsigned long*)& __m128i_op1[1]) = 0x403be000ffffe000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000ffc2f;
+  *((unsigned long*)& __m128i_result[0]) = 0x00201df000000000;
+  __m128i_out = __lsx_vssrlni_wu_d(__m128i_op0,__m128i_op1,0x29);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000005151515;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000006302e00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f417f417f027e03;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001fd0;
+  __m128i_out = __lsx_vssrlni_w_d(__m128i_op0,__m128i_op1,0x32);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_h_w(__m128i_op0,__m128i_op1,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001ffffff7f;
+  __m128i_out = __lsx_vssrlni_d_q(__m128i_op0,__m128i_op1,0x5f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000202fe02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_hu_w(__m128i_op0,__m128i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x01203f1e3d1c3b1a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3918371635143312;
+  *((unsigned long*)& __m128i_op1[1]) = 0x21201f1e1d1c1b1a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1918171615141312;
+  *((unsigned long*)& __m128i_result[1]) = 0x480f7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff7fff7fff7fff;
+  __m128i_out = __lsx_vssrlni_h_w(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00005dcbe7e830c0;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffacdb6dbecac;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1f5533a694f902c0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000001fffff59;
+  __m128i_out = __lsx_vssrlni_du_q(__m128i_op0,__m128i_op1,0x63);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000007f41;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_h_w(__m128i_op0,__m128i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000002000;
+  __m128i_out = __lsx_vssrlni_du_q(__m128i_op0,__m128i_op1,0x39);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x14ccc6320076a4d2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x685670d27e00682a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x14ccc6320076a4d2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x685670d27e00682a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100000000;
+  __m128i_out = __lsx_vssrlni_bu_h(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xc000000fc0003fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xbffffff0ffffc00f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000003f0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffc3ffff003e;
+  *((unsigned long*)& __m128i_result[1]) = 0x00c0000000bfffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000ffffff;
+  __m128i_out = __lsx_vssrlni_wu_d(__m128i_op0,__m128i_op1,0x28);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x800000810000807f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x808080010080007f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x800000810000807f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x808080010080007f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000020000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000020000020;
+  __m128i_out = __lsx_vssrlni_du_q(__m128i_op0,__m128i_op1,0x62);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0400400204004002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000002002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_du_q(__m128i_op0,__m128i_op1,0x6d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2a29282726252423;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2221201f1e1d1c1b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrlni_du_q(__m128i_op0,__m128i_op1,0x26);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000002002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2a29282726252423;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2221201f1e1d1c1b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00a8009800880078;
+  __m128i_out = __lsx_vssrlni_h_w(__m128i_op0,__m128i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000807f00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x80006b0080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff00007fff7fff;
+  __m128i_out = __lsx_vssrlni_h_w(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_bu_h(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_w_d(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000001fe01;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000001fe01;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0f0f0f0f00000000;
+  __m128i_out = __lsx_vssrlni_b_h(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_b_h(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_hu_w(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff010300ff0103;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x555500adfffc5cab;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010100000100;
+  __m128i_out = __lsx_vssrlni_bu_h(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_hu_w(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x03ff0101fc010102;
+  *((unsigned long*)& __m128i_op0[0]) = 0x03fffffffc010102;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff81010102;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vssrlni_w_d(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000300037ff000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0003000300a10003;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_wu_d(__m128i_op0,__m128i_op1,0x3c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000007070707;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_bu_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_du_q(__m128i_op0,__m128i_op1,0x45);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffdfffcfffdfffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffdfffcfffdfffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_d_q(__m128i_op0,__m128i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_bu_h(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000053a4f452;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000053a;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_w_d(__m128i_op0,__m128i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_b_h(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000b3a6000067da;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00004e420000c26a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_d_q(__m128i_op0,__m128i_op1,0x7a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffffffffffff;
+  __m128i_out = __lsx_vssrlni_d_q(__m128i_op0,__m128i_op1,0x38);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7c7c000000007176;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vssrlni_du_q(__m128i_op0,__m128i_op1,0x3e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000c6c7;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8d8d8d8d8d8cc6c6;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_w_d(__m128i_op0,__m128i_op1,0x3c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000aa822a8228222;
+  *((unsigned long*)& __m128i_op0[0]) = 0x03aa558ec8546eb6;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001a64b345308091;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001f2f2cab1c732a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0155ffff754affff;
+  *((unsigned long*)& __m128i_result[0]) = 0x034cffff03e5ffff;
+  __m128i_out = __lsx_vssrlni_hu_w(__m128i_op0,__m128i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xc1bdceee242070dc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe907b754d7eaa478;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_h_w(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_hu_w(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002711350a27112;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00d5701794027113;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_du_q(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000203000010d0;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffc00300000220;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlni_d_q(__m128i_op0,__m128i_op1,0x27);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000f50000000900;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000090900000998;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffffffffffff;
+  __m128i_out = __lsx_vssrlni_d_q(__m128i_op0,__m128i_op1,0x20);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001000010f8;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff8ffa2fffdffb0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0f0f0f0f00000f00;
+  __m128i_out = __lsx_vssrlni_bu_h(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00001802041b0013;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000007f7f02;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff7fffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff7fffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffff7ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_d_q(__m128i_op0,__m128i_op1,0x64);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000007fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_h_w(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_h_w(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_d_q(__m128i_op0,__m128i_op1,0x47);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0004007c00fc0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vssrani_wu_d(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffefffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f7f7f7f00107f04;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f0000fd7f0000fd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_w_d(__m128i_op0,__m128i_op1,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00cf01fe01fe01fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000301de01fe01fe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffc002000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0f00000000000000;
+  __m128i_out = __lsx_vssrani_bu_h(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe31c86e90cda86f7;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000000e3;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_du_q(__m128i_op0,__m128i_op1,0x38);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc39fffff007fffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000fe00fd;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffff0e700000000;
+  __m128i_out = __lsx_vssrani_w_d(__m128i_op0,__m128i_op1,0x32);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffff0000010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_hu_w(__m128i_op0,__m128i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfc01fd1300000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe00fd1400010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f0000007f000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080000180800100;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fff7fc01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x82c539ffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc72df14afbfafdf9;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000000000000;
+  __m128i_out = __lsx_vssrani_d_q(__m128i_op0,__m128i_op1,0x23);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_hu_w(__m128i_op0,__m128i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000000c0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001ffffff29;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000020000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000183fffffe5;
+  __m128i_out = __lsx_vssrani_w_d(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff000000ff0000;
+  __m128i_out = __lsx_vssrani_bu_h(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff0000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrani_d_q(__m128i_op0,__m128i_op1,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_w_d(__m128i_op0,__m128i_op1,0x2a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fefefe6a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000fbf9;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000007f8;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_du_q(__m128i_op0,__m128i_op1,0x2d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0a000a000a000a00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_d_q(__m128i_op0,__m128i_op1,0x4d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f007f007f007f00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000030000003f;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff0003003f;
+  __m128i_out = __lsx_vssrani_hu_w(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_hu_w(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_d_q(__m128i_op0,__m128i_op1,0x4c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x11000f2010000e20;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0f000d200e000c20;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrani_w_d(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x007b01ec007b3a9e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fff9fff9;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001fff9fffa;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x007ffe7ffe400000;
+  __m128i_out = __lsx_vssrani_du_q(__m128i_op0,__m128i_op1,0x2a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc485edbcc0000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000c485;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000010000;
+  __m128i_out = __lsx_vssrani_du_q(__m128i_op0,__m128i_op1,0x30);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21011f3f193d173b;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff39ff37ff35ff33;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000015d926c7;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000e41b;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000ff;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000007f7f7f7f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c0c0c0c0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0014000100000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_w_d(__m128i_op0,__m128i_op1,0x35);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00003f80000000ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_wu_d(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffff46;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrani_d_q(__m128i_op0,__m128i_op1,0x4c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffee00000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3a3a3a3b3a3a3a3a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3a3a00003a3a0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000003a0000003a;
+  __m128i_out = __lsx_vssrani_wu_d(__m128i_op0,__m128i_op1,0x38);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000080000068;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000038003;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000040033;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_bu_h(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000007ffc000;
+  __m128i_out = __lsx_vssrani_du_q(__m128i_op0,__m128i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffe0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000fff0;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000004000000040;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_h_w(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrani_w_d(__m128i_op0,__m128i_op1,0x28);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00fe000100cf005f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrani_du_q(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000005e94;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00005e96ffffb402;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00fe000100cf005f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000000bd;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001fc0000fffeff;
+  __m128i_out = __lsx_vssrani_w_d(__m128i_op0,__m128i_op1,0x27);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_w_d(__m128i_op0,__m128i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000002fffffffb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000010000fffb;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000bffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_du_q(__m128i_op0,__m128i_op1,0x42);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc0ff80ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff0000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_du_q(__m128i_op0,__m128i_op1,0x79);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x80008000ec82ab51;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000800089e08000;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000777777777777;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff7777ffff7777;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000003bbbbbbbbbb;
+  __m128i_out = __lsx_vssrani_d_q(__m128i_op0,__m128i_op1,0x45);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_h_w(__m128i_op0,__m128i_op1,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_wu_d(__m128i_op0,__m128i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0007fff800000000;
+  __m128i_out = __lsx_vssrani_w_d(__m128i_op0,__m128i_op1,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6a5d5b056f2f4978;
+  *((unsigned long*)& __m128i_op1[0]) = 0x17483c07141b5971;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xd4bade5e2e902836;
+  __m128i_out = __lsx_vssrani_hu_w(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0010001000000010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1000000010001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_hu_w(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00680486ffffffda;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff913bb9951901;
+  *((unsigned long*)& __m128i_op1[1]) = 0x67157b5100005000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x387c7e0a133f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_result[0]) = 0x0c0f000a070f0204;
+  __m128i_out = __lsx_vssrani_bu_h(__m128i_op0,__m128i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x98147a504d145000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x377b810912c0e000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x98147a504d145000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x377b810912c0e000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000000000000;
+  __m128i_out = __lsx_vssrani_d_q(__m128i_op0,__m128i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00009c7c00007176;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0xe);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c7c266e3faa293c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000f3040705;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_du_q(__m128i_op0,__m128i_op1,0x30);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_d_q(__m128i_op0,__m128i_op1,0x2e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x86dd8341b164f12b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9611c3985b3159f5;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff86dd83ff9611c3;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_w_d(__m128i_op0,__m128i_op1,0x28);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf9796558e39953fd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1010111105050000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4040000041410101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000808000020200;
+  __m128i_out = __lsx_vssrani_wu_d(__m128i_op0,__m128i_op1,0x2d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2e2b34ca59fa4c88;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3b2c8aefd44be966;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x2e34594c3b000000;
+  __m128i_out = __lsx_vssrani_bu_h(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff1afffefec0ec85;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff1aff6d48ce567f;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff80c400000148;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff80c1ffffe8de;
+  *((unsigned long*)& __m128i_result[1]) = 0xffe3ffd8ffe30919;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffffffff;
+  __m128i_out = __lsx_vssrani_h_w(__m128i_op0,__m128i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1313131313131313;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1313131313131313;
+  *((unsigned long*)& __m128i_op1[1]) = 0x34947b4b11684f92;
+  *((unsigned long*)& __m128i_op1[0]) = 0xd73691661e5b68b4;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vssrani_wu_d(__m128i_op0,__m128i_op1,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x084d1a0907151a3d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000007d07fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000ff;
+  __m128i_out = __lsx_vssrani_b_h(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000014eb54ab;
+  *((unsigned long*)& __m128i_op1[0]) = 0x14eb6a002a406a00;
+  *((unsigned long*)& __m128i_result[1]) = 0xe0001fffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffffffffffff;
+  __m128i_out = __lsx_vssrani_d_q(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff80000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_d_q(__m128i_op0,__m128i_op1,0x60);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffaf1500000fffa;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000f8a40000f310;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000003e2;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_wu_d(__m128i_op0,__m128i_op1,0x26);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_h_w(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf654ad7447e59090;
+  *((unsigned long*)& __m128i_op0[0]) = 0x27b1b106b8145f50;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_wu_d(__m128i_op0,__m128i_op1,0x3f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff8ffa2fffdffb0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_du_q(__m128i_op0,__m128i_op1,0x50);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrani_bu_h(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
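+  /* The blocks below move on from the immediate-count __lsx_vssrani_* forms
+     to the __lsx_vssrlrn_* intrinsics (per the mnemonic: saturating
+     shift-right-logical with rounding, narrowing each element to the next
+     smaller width), covering both signed and unsigned result variants.  */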
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000200020002;
+  __m128i_out = __lsx_vssrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x004200a000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x004200a000200001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff00007fff7fff;
+  __m128i_out = __lsx_vssrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00040003ff83ff84;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00040003ff4dffca;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffefffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000002020202;
+  __m128i_out = __lsx_vssrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f7f7f7f7f7f7f7f;
+  __m128i_out = __lsx_vssrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffbe6ed563;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrlrn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0100000001000100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100010000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffff732a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000fbf9;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000007f00000000;
+  __m128i_out = __lsx_vssrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrlrn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrlrn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000004fc04f81;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000004fc04f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00007f7f00007f7f;
+  __m128i_out = __lsx_vssrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc1000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffc1000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff000000007fff;
+  __m128i_out = __lsx_vssrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000bd3d00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffff0000000ad3d;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffff000fffff000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000007fff0000;
+  __m128i_out = __lsx_vssrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf001f0010101f002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vssrlrn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000dfa6e0c6;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000d46cdc13;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff80df00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00007f7f00007f7f;
+  __m128i_out = __lsx_vssrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff3fbfffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3fbf3fbf00007fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x007f7f7f01027f02;
+  __m128i_out = __lsx_vssrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000400000004;
+  __m128i_out = __lsx_vssrlrn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000000000007;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vssrlrn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffe0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3f413f4100000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f801fe000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff7fff7fff7fff;
+  __m128i_out = __lsx_vssrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000100000000fc;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000100000000fc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0100000001000000;
+  __m128i_out = __lsx_vssrlrn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrlrn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0c0b0a090b0a0908;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a09080709080706;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000040a04000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000040a04000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00123fff00120012;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0012001200120012;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00003fff00010000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1200091212121212;
+  __m128i_out = __lsx_vssrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0800010001ff8000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2e9028362e902836;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2e9028362e902836;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001000000010;
+  __m128i_out = __lsx_vssrlrn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x33f5c2d7d975d7fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vssrlrn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000024170000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0002711350a27112;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00d5701794027113;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrlrn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4399d3221a29d3f2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0674c886fcba4e98;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfdce8003090b0906;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff001a00000000;
+  __m128i_out = __lsx_vssrlrn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001000000010;
+  __m128i_out = __lsx_vssrlrn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001fffe00014b41;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001fffe0001ffde;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000100020002;
+  __m128i_out = __lsx_vssrlrn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
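+  /* From this point the tests target the __lsx_vssrarn_* intrinsics
+     (per the mnemonic: saturating shift-right-arithmetic with rounding,
+     narrowing), again in both signed and unsigned result variants.  */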
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffd24271c4;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2711bad1e8e309ed;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbf8000000000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcf00000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000210011084;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vssrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000017fda829;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0403cfcf01c1595e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x837cd5db43fc55d4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff7fff80007fff;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffcb410000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffeb827ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000800000008;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000080000000;
+  __m128i_out = __lsx_vssrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc1bdceee242070db;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe8c7b756d76aa478;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfefd7f7f7f7f7f7e;
+  *((unsigned long*)& __m128i_op0[0]) = 0xdffdbffeba6f5543;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffffff000000ff;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ffffff000000ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000002010;
+  __m128i_out = __lsx_vssrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff00000000000001;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc1bdceee242070db;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe8c7b756d76aa478;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000003fffff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003fffff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff000000ff00;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000007ae567a3e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000700ff00000000;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0bd80bd80bdfffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0bd80bd80bd80000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x006f0efe258ca851;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000ffff00;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000f00f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0032000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000007fff;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2020202020207f7f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000007fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff7fff7fff0000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1111311111114111;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111311111110000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2020202020207fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f417f417f027e03;
+  *((unsigned long*)& __m128i_op1[1]) = 0x9780697084f07dd7;
+  *((unsigned long*)& __m128i_op1[0]) = 0x87e3285243051cf3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fea8ff44;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fea8ff44;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000008000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffff00;
+  __m128i_out = __lsx_vssrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x13f9c5b60028a415;
+  *((unsigned long*)& __m128i_op0[0]) = 0x545cab1d81a83bea;
+  *((unsigned long*)& __m128i_op1[1]) = 0x13f9c5b60028a415;
+  *((unsigned long*)& __m128i_op1[0]) = 0x545cab1d81a83bea;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff0015172b;
+  __m128i_out = __lsx_vssrarn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x14ccc631eb3339ce;
+  *((unsigned long*)& __m128i_op0[0]) = 0x685670d197a98f2e;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000010000;
+  __m128i_out = __lsx_vssrarn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffff0000;
+  __m128i_out = __lsx_vssrarn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000e36400015253;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000035ed0001e000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000e36400015253;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000035ed0001e000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1c6c80007fffffff;
+  __m128i_out = __lsx_vssrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000b4a00008808;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080800000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrarn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc2fc0000c3040000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc2fc0000c3040000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000060000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000060000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0600000100000001;
+  __m128i_out = __lsx_vssrarn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0080008000800080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0080006b00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000500000000;
+  __m128i_out = __lsx_vssrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7efefefe82010201;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ff0000ff;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffc0ff80ff800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffff00;
+  __m128i_out = __lsx_vssrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff010300ff0103;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000002ffffffff;
+  __m128i_out = __lsx_vssrarn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000045340a6;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000028404044;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000fffffffe000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000102020204000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x045340a628404044;
+  __m128i_out = __lsx_vssrarn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001400000014;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000adad0000adad;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000052520000adad;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd6a09e662ab46b31;
+  *((unsigned long*)& __m128i_op0[0]) = 0x34b8122ef4054bb3;
+  *((unsigned long*)& __m128i_op1[1]) = 0x9c9c9c9b509be72f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3513f2e3a1774d2c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000501ffff0005;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0021b761002c593c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x002584710016cc56;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ff0000ffff;
+  __m128i_out = __lsx_vssrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000200000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00020000ffff0001;
+  __m128i_out = __lsx_vssrarn_hu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x004001be00dc008e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vssrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1f3f06d4fcba4e98;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2e1135681fa8d951;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4399d3221a29d3f2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000007d07fffffff;
+  __m128i_out = __lsx_vssrarn_w_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0674c8868a74fc80;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfdce8003090b0906;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0); /* extracted value left unchecked */
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000008686;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00008e5680008685;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00007fff7fff8000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffc7f100004000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000c7f14000;
+  __m128i_out = __lsx_vssrarn_h_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4500000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4400000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff000000ff000000;
+  __m128i_out = __lsx_vssrarn_bu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8a8a8a8a8a8a8a8a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8a8a8a8a8a8a8a8a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_wu_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarn_b_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_w_d(__m128i_op0,__m128i_op1,0x3d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0x4); /* extracted value left unchecked */
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8080808000008080;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080000080800000;
+  __m128i_out = __lsx_vssrlrni_bu_h(__m128i_op0,__m128i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000080000000;
+  __m128i_out = __lsx_vssrlrni_wu_d(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000210011084;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_bu_h(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000007f0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000007f00;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001000000;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101010400100203;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0103010301020109;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000110000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000007f00000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0202000402020202;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000200000010000;
+  __m128i_out = __lsx_vssrlrni_b_h(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_du_q(__m128i_op0,__m128i_op1,0x56);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_du_q(__m128i_op0,__m128i_op1,0x6d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0001ffff8002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010000400020004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffff20ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffc0020ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x07fff80000008000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000007ffe001;
+  __m128i_out = __lsx_vssrlrni_w_d(__m128i_op0,__m128i_op1,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_d_q(__m128i_op0,__m128i_op1,0x7c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x03574e3b94f2ca31;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000001f807b89;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000005050000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0505000005050505;
+  *((unsigned long*)& __m128i_result[1]) = 0x000d02540000007e;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001400140014;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_du_q(__m128i_op0,__m128i_op1,0x41);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_du_q(__m128i_op0,__m128i_op1,0x3b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x56a09e662ab46b31;
+  *((unsigned long*)& __m128i_op1[0]) = 0xb4b8122ef4054bb3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x02b504f305a5c091;
+  __m128i_out = __lsx_vssrlrni_w_d(__m128i_op0,__m128i_op1,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_w_d(__m128i_op0,__m128i_op1,0x37);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000d000d000d000d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000d000d000d000d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000680000006800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_w_d(__m128i_op0,__m128i_op1,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000400;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000400;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_b_h(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00005555aaabfffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003fffffff000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000000ab;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000ff;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_du_q(__m128i_op0,__m128i_op1,0x43);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000007fff7fff;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffff0000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000080;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrlrni_wu_d(__m128i_op0,__m128i_op1,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000080000000000;
+  __m128i_out = __lsx_vssrlrni_wu_d(__m128i_op0,__m128i_op1,0x34);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x027c027c000027c0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000004f804f81;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000004f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000010000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001400000014;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ff81007c;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffb7005f0070007c;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff80007e028401;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9a10144000400000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001ffff00010;
+  __m128i_out = __lsx_vssrlrni_du_q(__m128i_op0,__m128i_op1,0x5b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_wu_d(__m128i_op0,__m128i_op1,0x29);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000040000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000080000000000;
+  __m128i_out = __lsx_vssrlrni_hu_w(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffff9cff05;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffff9cfebd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000002;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff7ffffef77fffdd;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf77edf9cffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vssrlrni_w_d(__m128i_op0,__m128i_op1,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001fffff001fffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001fffff001fffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x21201f1e1d1c1b1a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1918171615141312;
+  *((unsigned long*)& __m128i_result[1]) = 0x10ff10ff10ff10ff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrlrni_bu_h(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffa6ff91fdd8ef77;
+  *((unsigned long*)& __m128i_op0[0]) = 0x061202bffb141c38;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op1[0]) = 0x010101fe0101fe87;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000004000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_wu_d(__m128i_op0,__m128i_op1,0x3a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffd60001723aa5f8;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000007f007f7f;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f7f7f7f7f7f7f7f;
+  __m128i_out = __lsx_vssrlrni_b_h(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x808080e280808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080636380806363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x808080e280808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080636380806363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0004000400040004;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000d0000000d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000dffff000d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000070007;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000007ffff;
+  __m128i_out = __lsx_vssrlrni_hu_w(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000800c00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_hu_w(__m128i_op0,__m128i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000007fff7fff;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffff0100ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0607060700000807;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0707f8f803e8157e;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrlrni_du_q(__m128i_op0,__m128i_op1,0x31);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_b_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_du_q(__m128i_op0,__m128i_op1,0x21);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_b_h(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_bu_h(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc0808000c0808000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000003020302;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc0ff80ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_wu_d(__m128i_op0,__m128i_op1,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff0000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_bu_h(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_bu_h(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ffffffe00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ffffffe00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrlrni_du_q(__m128i_op0,__m128i_op1,0x3a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffc0800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000008080600;
+  __m128i_out = __lsx_vssrlrni_b_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x52525252adadadad;
+  *((unsigned long*)& __m128i_op0[0]) = 0x52525252adadadad;
+  *((unsigned long*)& __m128i_op1[1]) = 0x800000007fffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x800000007fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrlrni_hu_w(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003ef89df07f0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003ec0fc0fbfe001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3ff800ff2fe6c00d;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff40408ece0e0de;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vssrlrni_w_d(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x4000400040004000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ff960001005b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffa500010003;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffff7ffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0020000000000000;
+  __m128i_out = __lsx_vssrlrni_w_d(__m128i_op0,__m128i_op1,0x2b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1748c4f9ed1a5870;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffffffffffff;
+  __m128i_out = __lsx_vssrlrni_d_q(__m128i_op0,__m128i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff7ffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfcfcfcdcfcfcfcdc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfcfcfcdcfcfcfcdc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000000010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000100010;
+  __m128i_out = __lsx_vssrlrni_hu_w(__m128i_op0,__m128i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x4000000040000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_du_q(__m128i_op0,__m128i_op1,0x27);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_w_d(__m128i_op0,__m128i_op1,0x28);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_du_q(__m128i_op0,__m128i_op1,0x26);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x117d7f7b093d187f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000034;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfe1bfefe00011ee1;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe1bfe6c03824c60;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f7f7f7f0000001a;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f7f017f7f7f7f7f;
+  __m128i_out = __lsx_vssrlrni_b_h(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff3a81ffff89fd;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffb3c3ffff51ba;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0802080408060803;
+  __m128i_out = __lsx_vssrlrni_b_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff00ffffff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff000900ffff98;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vssrlrni_w_d(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000feff23560000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fd1654860000;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0xc); /* extracted value left unchecked */
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000056000056;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000efffefff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa03aa03ae3e2e3e2;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_d_q(__m128i_op0,__m128i_op1,0x75);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000760151;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003e0021009a009a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000246d9755;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000003e2427c2ee;
+  *((unsigned long*)& __m128i_result[1]) = 0x00001e5410082727;
+  *((unsigned long*)& __m128i_result[0]) = 0x00007f7f00107f7f;
+  __m128i_out = __lsx_vssrlrni_b_h(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000f1384;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000004ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vssrlrni_bu_h(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x10f8000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrlrni_h_w(__m128i_op0,__m128i_op1,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_du_q(__m128i_op0,__m128i_op1,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff60090958;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0fa96b88d9944d42;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00001802041b0013;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_du_q(__m128i_op0,__m128i_op1,0x72);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_b_h(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0200020002000200;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_wu_d(__m128i_op0,__m128i_op1,0x3f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101010100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_h_w(__m128i_op0,__m128i_op1,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000017fda829;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_du_q(__m128i_op0,__m128i_op1,0x5c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0002000000020000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xda4643d5301c4000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc1fc0d3bf55c4000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7505853d654185f5;
+  *((unsigned long*)& __m128i_op1[0]) = 0x01010000fefe0101;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_wu_d(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000fe00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_h_w(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffff02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00020002000d0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000020f2300ee;
+  *((unsigned long*)& __m128i_result[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_b_h(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000000f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_d_q(__m128i_op0,__m128i_op1,0x79);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000073;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000010000002b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000400000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x01ff01ff01ff01ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x01ff01ff01ff01ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffffffffffff;
+  __m128i_out = __lsx_vssrarni_d_q(__m128i_op0,__m128i_op1,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_d_q(__m128i_op0,__m128i_op1,0x59);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_h_w(__m128i_op0,__m128i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001800390049ffaa;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0029ff96005cff88;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_d_q(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x03c0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x03c0038000000380;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0f0000000f000000;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0bef0b880bd80bd8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0bd80bd80bdfffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0bd80bd80bd80000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000017b017b01;
+  __m128i_out = __lsx_vssrarni_d_q(__m128i_op0,__m128i_op1,0x5b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffe0001fffe0001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffe0001fffe0001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_h_w(__m128i_op0,__m128i_op1,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0x32);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_du_q(__m128i_op0,__m128i_op1,0x30);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf0800320fff1fa20;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0032000000000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1111113111111141;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111113111111121;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0032000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffff0000;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f417f417f027e03;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe93d0bd19ff0c170;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5237c1bac9eadf55;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_du_q(__m128i_op0,__m128i_op1,0x60);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000065a0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0x2e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9941d155f43a9d08;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000080000000;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0c0c8b8a8b8b0b0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8b8a8a898a8a8909;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1817161517161514;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1615141315141312;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssrarni_wu_d(__m128i_op0,__m128i_op1,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc0fffff000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffe00000;
+  __m128i_out = __lsx_vssrarni_h_w(__m128i_op0,__m128i_op1,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0x29);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010001000000010;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000080000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_h_w(__m128i_op0,__m128i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_h_w(__m128i_op0,__m128i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_du_q(__m128i_op0,__m128i_op1,0x58);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100fe000100fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0x31);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_b_h(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0d1202e19235e2bc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xea38e0f75f6e56d1;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffe500ffffc085;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffc000ffffc005;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100080000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_h_w(__m128i_op0,__m128i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0400400204004002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010000000000;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0x32);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffe080f6efc100f7;
+  *((unsigned long*)& __m128i_op0[0]) = 0xefd32176ffe100f7;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffe080f6efc100f7;
+  *((unsigned long*)& __m128i_op1[0]) = 0xefd32176ffe100f7;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_wu_d(__m128i_op0,__m128i_op1,0x2c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000005452505;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000004442403e4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x03fc03fc03fc03fc;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000b4a00008808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080800000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffff01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_du_q(__m128i_op0,__m128i_op1,0x71);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2ea268972ea2966a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4026f4ffbc175bff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x5d7f5d807fea807f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_b_h(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff0fffffff00001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff0fffffff09515;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0001ffff9515;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ff00000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000003000000d612;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000bfffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0001ffff9515;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000500000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80808080806b000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000c0c0c000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_b_h(__m128i_op0,__m128i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffe1fffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff7ffffffb;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000080008;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_du_q(__m128i_op0,__m128i_op1,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_b_h(__m128i_op0,__m128i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1ab6021f72496458;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7750af4954c29940;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1ab6021f72496458;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7750af4954c29940;
+  *((unsigned long*)& __m128i_result[1]) = 0x6ad8ffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x6ad8ffffffffffff;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002008300500088;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000088;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_h_w(__m128i_op0,__m128i_op1,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_b_h(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_d_q(__m128i_op0,__m128i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000020000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010000000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_h_w(__m128i_op0,__m128i_op1,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0x2d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1200091212121212;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000008000000080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_h_w(__m128i_op0,__m128i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_d_q(__m128i_op0,__m128i_op1,0x51);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_du_q(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_h_w(__m128i_op0,__m128i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000c6c6c6c6;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c6c6c6c6;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fffeff98;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0014ffe4ff76ffc4;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000011;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000016;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000011;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000016;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0x2b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff86dd83ff9611c3;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000035697d4e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000013ecaadf2;
+  *((unsigned long*)& __m128i_result[1]) = 0xe280e67f00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00007f7f00007f80;
+  __m128i_out = __lsx_vssrarni_b_h(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf9796558e39953fd;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000080000000;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x017001a002c80260;
+  *((unsigned long*)& __m128i_op0[0]) = 0x01d8000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2e34594c3b000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vssrarni_wu_d(__m128i_op0,__m128i_op1,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00060fbf02596848;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00020fbf04581ec0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x010169d9010169d9;
+  *((unsigned long*)& __m128i_op1[0]) = 0x01010287010146a1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000200000001;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op1[1]) = 0x004d004d004d004d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x004d004d004d004d;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffffffffffff;
+  __m128i_out = __lsx_vssrarni_d_q(__m128i_op0,__m128i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x06d9090909090909;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_d_q(__m128i_op0,__m128i_op1,0x48);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0039d21e3229d4e8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6d339b4f3b439885;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffff000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000d00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffc0000000000000;
+  __m128i_out = __lsx_vssrarni_d_q(__m128i_op0,__m128i_op1,0x2e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000100000001000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x37b951002d81a921;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0x3e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000075dbe982;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000071e48cca;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0ebb7d300e3c9199;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vssrarni_w_d(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000930400008a10;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00006f9100007337;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00c2758000bccf42;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00a975be00accf03;
+  *((unsigned long*)& __m128i_result[1]) = 0x00250023001c001d;
+  *((unsigned long*)& __m128i_result[0]) = 0x309d2f342a5d2b34;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff01ffffe41f0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff00000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000155;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000002b;
+  __m128i_out = __lsx_vssrarni_bu_h(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00e400ff00e400;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfee1f6f18800ff7f;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssrarni_hu_w(__m128i_op0,__m128i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
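+  /* The __lsx_vclo_{b,h,w,d} cases count the number of leading one
+     bits in each element.  */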
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000005555555554;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000005555555554;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000040;
+  __m128i_out = __lsx_vclo_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xe2ecd48adedc7c82;
+  *((unsigned long*)& __m128i_op0[0]) = 0x25d666472b01d18d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0303020102020001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000000000201;
+  __m128i_out = __lsx_vclo_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000fefefe6a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c2bac2c2;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000007070700;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000002010202;
+  __m128i_out = __lsx_vclo_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000007e8a60;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000001edde;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000003;
+  __m128i_out = __lsx_vclo_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000020;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x05d0ae6002e8748e;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcd1de80217374041;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000000;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000001fffff59;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000100010;
+  __m128i_out = __lsx_vclo_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000aaaa;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000200000000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe500ffffc085;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffc000ffffc005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001300000012;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001200000012;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001000000000;
+  __m128i_out = __lsx_vclo_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc0ff80ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000a00000009;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vclo_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x413e276583869d79;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f7f017f9d8726d3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc090380000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000200000000d;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000000;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010012;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fec20704;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclo_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffff4;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffff4;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000200000001c;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000200000001c;
+  __m128i_out = __lsx_vclo_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
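+  /* The __lsx_vclz_{b,h,w,d} cases count the number of leading zero
+     bits in each element.  */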
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010000800100008;
+  __m128i_out = __lsx_vclz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000001fc1a568;
+  *((unsigned long*)& __m128i_op0[0]) = 0x02693fe0e7beb077;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000030000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0006000200000000;
+  __m128i_out = __lsx_vclz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000020;
+  __m128i_out = __lsx_vclz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000100010;
+  __m128i_out = __lsx_vclz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000020;
+  __m128i_out = __lsx_vclz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000020;
+  __m128i_out = __lsx_vclz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000020;
+  __m128i_out = __lsx_vclz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f7f000b000b000b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000b000b010a000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101080408040804;
+  *((unsigned long*)& __m128i_result[0]) = 0x0804080407040804;
+  __m128i_out = __lsx_vclz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1ffffffff8001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf0bd80bd80bd8000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100010000fe7c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000100010000fe01;
+  *((unsigned long*)& __m128i_result[1]) = 0x000f000f00100000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000f000f00100000;
+  __m128i_out = __lsx_vclz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vclz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vclz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x41dfffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0100000008080808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vclz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000040;
+  __m128i_out = __lsx_vclz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000039;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000039;
+  __m128i_out = __lsx_vclz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000020;
+  __m128i_out = __lsx_vclz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000100010;
+  __m128i_out = __lsx_vclz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff000100ff00fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff003000ff00a0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0008000f00080008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0008000a00080008;
+  __m128i_out = __lsx_vclz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfe813f00fe813f00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe813f00fe813f00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000002;
+  __m128i_out = __lsx_vclz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000bffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000040;
+  __m128i_out = __lsx_vclz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000c0c00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vclz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vclz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x687a8373f249bc44;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7861145d9241a14a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101030100010001;
+  __m128i_out = __lsx_vclz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000040;
+  __m128i_out = __lsx_vclz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000020;
+  __m128i_out = __lsx_vclz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080700000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vclz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vclz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000f0000000f;
+  __m128i_out = __lsx_vclz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000001f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000008000001e;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000200000001b;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000000;
+  __m128i_out = __lsx_vclz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080808080805;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080805;
+  __m128i_out = __lsx_vclz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000100010;
+  __m128i_out = __lsx_vclz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vclz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000000000;
+  __m128i_out = __lsx_vclz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000100010;
+  __m128i_out = __lsx_vclz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
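+  /* The __lsx_vpcnt_{b,h,w,d} cases count the number of set bits
+     (population count) in each element.  */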
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x7);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000100010;
+  __m128i_out = __lsx_vpcnt_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000100010;
+  __m128i_out = __lsx_vpcnt_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffefefefe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000003c;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000010;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000800000008;
+  __m128i_out = __lsx_vpcnt_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f80000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0701000007010000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0701000000000000;
+  __m128i_out = __lsx_vpcnt_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x807f7f8000ffff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff00feff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0107070100080800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000080800070800;
+  __m128i_out = __lsx_vpcnt_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_result[1]) = 0x0303030303030303;
+  *((unsigned long*)& __m128i_result[0]) = 0x0303030303030303;
+  __m128i_out = __lsx_vpcnt_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000100010;
+  __m128i_out = __lsx_vpcnt_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xe0404041e0404041;
+  *((unsigned long*)& __m128i_op0[0]) = 0x803f800080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000000e;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000009;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0007000000040000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003000000010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0003000000010000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000000010000;
+  __m128i_out = __lsx_vpcnt_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1111111111111111;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111111111111111;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000010;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0ba00ba00ba00ba0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0ba00ba00ba011eb;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000a0000000a;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000a0000000d;
+  __m128i_out = __lsx_vpcnt_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfbfbfb17fbfb38ea;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfbfb47fbfbfb0404;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000002f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000029;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffbfc0ffffbfc0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000032;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21201f1e19181716;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0003000900050007;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000100010;
+  __m128i_out = __lsx_vpcnt_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000002;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff0800080008000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe160065422d476da;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000d00000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000b00000010;
+  __m128i_out = __lsx_vpcnt_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001000000000;
+  __m128i_out = __lsx_vpcnt_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000040;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000010100000101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000010100000101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000002;
+  __m128i_out = __lsx_vpcnt_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vpcnt_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000100010;
+  __m128i_out = __lsx_vpcnt_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000020000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0103000201030002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000008;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000200000001e;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000200000001e;
+  __m128i_out = __lsx_vpcnt_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xbbe5560400010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe7e5dabf00010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x000b000500010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x000b000c00010001;
+  __m128i_out = __lsx_vpcnt_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000100010;
+  __m128i_out = __lsx_vpcnt_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001f0000001f;
+  __m128i_out = __lsx_vpcnt_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000020;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000600007fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000008ffffa209;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000011;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000016;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000467fef81;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000013;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000fe03fe01;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fe01fe01;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000007020701;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000007010701;
+  __m128i_out = __lsx_vpcnt_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f80000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpcnt_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf654ad7447e59090;
+  *((unsigned long*)& __m128i_op0[0]) = 0x27b1b106b8145f50;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000120000000d;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000e0000000e;
+  __m128i_out = __lsx_vpcnt_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
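+  /* __lsx_vbitset_{b,h,w,d}: set, in each element of the first operand, the
+     bit indexed by the corresponding element of the second operand
+     (modulo the element width).  */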
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vbitset_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vbitset_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffe001ffffe001;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffe001ffffe001;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000038335ca2777;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000800800000;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vbitset_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf51cf8dad6040188;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0982e2daf234ed87;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xf51df8dbd6050189;
+  *((unsigned long*)& __m128i_result[0]) = 0x0983e2dbf235ed87;
+  __m128i_out = __lsx_vbitset_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfc01fcfefc02fdf7;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe00fcfffe01fd01;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5d5d5d5d5d5d5d55;
+  *((unsigned long*)& __m128i_result[1]) = 0xfc01fcfefc02fdf7;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe00fcfffe21fd01;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fff7fc01;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x80000000fff7fc01;
+  __m128i_out = __lsx_vbitset_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vbitset_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffe00000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff01010105;
+  __m128i_out = __lsx_vbitset_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001c00ffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010201808040;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010280808040;
+  __m128i_out = __lsx_vbitset_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3f8000003f800001;
+  *((unsigned long*)& __m128i_result[0]) = 0x3f8000003f800001;
+  __m128i_out = __lsx_vbitset_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000010a000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000104000800;
+  __m128i_out = __lsx_vbitset_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000897957687;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000408;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000010000000080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000100;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffe0001fffe0001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffe0001fffe0001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000002;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff994cb09c;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffc3639d96;
+  *((unsigned long*)& __m128i_op1[1]) = 0x20de27761210386d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x34632935195a123c;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff994db09c;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffc7639d96;
+  __m128i_out = __lsx_vbitset_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000545cab1d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000081a83bea;
+  *((unsigned long*)& __m128i_op1[1]) = 0x13f9c5b60028a415;
+  *((unsigned long*)& __m128i_op1[0]) = 0x545cab1d81a83bea;
+  *((unsigned long*)& __m128i_result[1]) = 0x00400000547cab1d;
+  *((unsigned long*)& __m128i_result[0]) = 0x2000000081a83fea;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000038003;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000040033;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100080000;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[1]) = 0x0909090909090909;
+  *((unsigned long*)& __m128i_result[0]) = 0x0909090909090909;
+  __m128i_out = __lsx_vbitset_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00a600e000a600e0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x01500178010000f8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0100000001000000;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vbitset_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfefbff06fffa0004;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfefeff04fffd0004;
+  *((unsigned long*)& __m128i_result[1]) = 0x4008804080040110;
+  *((unsigned long*)& __m128i_result[0]) = 0x4040801080200110;
+  __m128i_out = __lsx_vbitset_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vbitset_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vbitset_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vbitset_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8101010181010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x8101010181010101;
+  __m128i_out = __lsx_vbitset_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000020000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000020000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101030101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101030101;
+  __m128i_out = __lsx_vbitset_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vbitset_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vbitset_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd78cfd70b5f65d76;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5779108fdedda7e4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xd78cfd70b5f65d77;
+  *((unsigned long*)& __m128i_result[0]) = 0x5779108fdedda7e5;
+  __m128i_out = __lsx_vbitset_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00004a1e00004a1e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000100;
+  *((unsigned long*)& __m128i_result[0]) = 0x4000000040000000;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0007000000050000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0003000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0080000100200001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0008000200020002;
+  __m128i_out = __lsx_vbitset_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff80ffff7e02;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00feff8000ff80ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0280000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff81ffff7f03;
+  *((unsigned long*)& __m128i_result[0]) = 0x04ffff8101ff81ff;
+  __m128i_out = __lsx_vbitset_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4480000044800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x45c0000044800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00fe00fe7fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x4481000144810001;
+  *((unsigned long*)& __m128i_result[0]) = 0x45c04000c4808000;
+  __m128i_out = __lsx_vbitset_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3bc000003a800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00fe00fe7fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x3a8100013a810001;
+  *((unsigned long*)& __m128i_result[0]) = 0x7bc04000ba808000;
+  __m128i_out = __lsx_vbitset_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000cecd00004657;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000c90000011197;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000200000800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100800000;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f8000017f800001;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f8000017f800001;
+  __m128i_out = __lsx_vbitset_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
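+  /* __lsx_vbitrev_{b,h,w,d}: toggle, in each element of the first operand,
+     the bit indexed by the corresponding element of the second operand
+     (modulo the element width).  */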
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1b71a083b3dec3cd;
+  *((unsigned long*)& __m128i_op1[0]) = 0x373a13323b4cdbc1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0802010808400820;
+  *((unsigned long*)& __m128i_result[0]) = 0x8004080408100802;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vbitrev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vbitrev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000800080008000;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000501000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000008;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000040100;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010400100203;
+  *((unsigned long*)& __m128i_result[0]) = 0x0103010301020109;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffbe6ed563;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd0b1ffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9d519ee8d2d84f1d;
+  *((unsigned long*)& __m128i_result[1]) = 0xfefd7f7f7f7f7f7e;
+  *((unsigned long*)& __m128i_result[0]) = 0xdffdbffeba6f5543;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7da9b23a624082fd;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x2002040404010420;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010180800101;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fffe0000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001ffff0001fffe;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0e7ffffc01fffffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000003f803f4;
+  *((unsigned long*)& __m128i_result[1]) = 0x1000000010000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100100000;
+  __m128i_out = __lsx_vbitrev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_op0[0]) = 0x040004000400040d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0501050105010501;
+  *((unsigned long*)& __m128i_result[0]) = 0x050105010501050c;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vbitrev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x000100010001fffe;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000007f00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7ffffffeffffffff;
+  __m128i_out = __lsx_vbitrev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0040000000400000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0040000000400000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0141010101410101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0141010101410101;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x65b780a3ae3bf8cb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x161d0c363c200826;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x65b780a2ae3bf8ca;
+  *((unsigned long*)& __m128i_result[0]) = 0x161d0c373c200827;
+  __m128i_out = __lsx_vbitrev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfe01fe01fe01fe01;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe01fe01fe01fe01;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vbitrev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000003bfb4000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vbitrev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vbitrev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0040004000400040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0040004000400040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000021ffffffdf;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000e60;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1ff85ffe2ae5d973;
+  *((unsigned long*)& __m128i_result[1]) = 0x00010020fffeffde;
+  *((unsigned long*)& __m128i_result[0]) = 0x0100400100200e68;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00d3012acc56f9bb;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000001021;
+  *((unsigned long*)& __m128i_result[1]) = 0x0108020410400208;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010102;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000ff0000ff86;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x010101fe0101fe87;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x343d8dc5b0ed5a08;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x353c8cc4b1ec5b09;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0037ffc8d7ff2800;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff00ff00ffffff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0038d800ff000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00fffe00fffffe00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0137ffc9d7fe2801;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f00ff017fffff01;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000200000002000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001200100012001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffe7fffffff;
+  __m128i_out = __lsx_vbitrev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vbitrev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000010000000;
+  __m128i_out = __lsx_vbitrev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffdfffdfffdfffd;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffdfffdfffdfffd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffdfffcfffdfffc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffdfffcfffdfffc;
+  __m128i_out = __lsx_vbitrev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001ffff0101ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000101010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0103fefd0303fefd;
+  *((unsigned long*)& __m128i_result[0]) = 0x0103fefd0103fefd;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6a5d5b056f2f4978;
+  *((unsigned long*)& __m128i_op1[0]) = 0x17483c07141b5971;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002001000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000008000020000;
+  __m128i_out = __lsx_vbitrev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffefffefffefffe;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001ce28f9c0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000004e06b0890;
+  *((unsigned long*)& __m128i_result[1]) = 0xfefefefdbffefdfe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefefeeffef7fefe;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003ffffe00800000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff810001ff810002;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f804000ff810001;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff1affff01001fe0;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff1aff6d02834d70;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000034;
+  *((unsigned long*)& __m128i_result[1]) = 0xfe1bfefe00011ee1;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe1bfe6c03824c60;
+  __m128i_out = __lsx_vbitrev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x41945926d8000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00001e5410082727;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00007f7f00107f7f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001001001000080;
+  *((unsigned long*)& __m128i_result[0]) = 0x4195d926d8018000;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f8100017f810001;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f8100017f810001;
+  __m128i_out = __lsx_vbitrev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x545501550001113a;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xd45501550001113a;
+  __m128i_out = __lsx_vbitrev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
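+  /* __lsx_vbitclr_{b,h,w,d}: clear, in each element of the first operand,
+     the bit indexed by the corresponding element of the second operand
+     (modulo the element width).  */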
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000e0000000e0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00e0000000e00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000e0000000e0;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000004000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff8004000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x19df307a5d04acbb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5ed032b06bde1ab6;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x19de307a5d04acba;
+  *((unsigned long*)& __m128i_result[0]) = 0x5ed032b06bde1ab6;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0018001800180018;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0018001800180018;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd8248069ffe78077;
+  *((unsigned long*)& __m128i_op1[1]) = 0x85bd6b0e94d89998;
+  *((unsigned long*)& __m128i_op1[0]) = 0xd83c8081ffff808f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xd82480697f678077;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000006597cc3d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7505853d654185f5;
+  *((unsigned long*)& __m128i_op1[0]) = 0x01010000fefe0101;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000006595cc1d;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffe0000fffe0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffe0000fffe0000;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80000000fff7fc01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x80000000fff6fc00;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffff800;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fffefffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fffef800;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000001000100;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000001000100;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffefffffffe;
+  __m128i_out = __lsx_vbitclr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4101010141010100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x4101010141010100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x23b57fa16d39f7c8;
+  *((unsigned long*)& __m128i_op1[0]) = 0x161c0c363c200824;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000ffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000ffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000fefe00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000fefe00000000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1ffffffff8001000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf0bd80bd80bd8000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff7ffffffefffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xdfffdfffdffffffe;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000037;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000036;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100010001007c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000100000001007c;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000000010000;
+  __m128i_out = __lsx_vbitclr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfefa000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfefa000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67b7cf643c9d636a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x39d70e366f547977;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0002ffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x66b34f643c9c626a;
+  *((unsigned long*)& __m128i_result[0]) = 0x38d60e366e547876;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2020202020207fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000007fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_result[0]) = 0x2020202020207f7f;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x7ef8000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7ef8000000000000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000077f97;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffeff7f0000;
+  __m128i_out = __lsx_vbitclr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x14ccc6320176a4d2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x685670d37e80682a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x14ccc6320076a4d2;
+  *((unsigned long*)& __m128i_result[0]) = 0x685670d27e00682a;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00001b4a00007808;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100010001000100;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001000100;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5d7f5d807fea807f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x5d7f5d007f6a007f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fffefffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fffefffe;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x207fffff22bd04fb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x207fffff22bd04fb;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000002000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000002000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x207fffff22bd04fa;
+  *((unsigned long*)& __m128i_result[0]) = 0x207fffff22bd04fa;
+  __m128i_out = __lsx_vbitclr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffefffefffefffe;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000101010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000014;
+  __m128i_out = __lsx_vbitclr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100000000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00007fff7fff8000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000b81c8382;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000077af9450;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00007efe7f7f8000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000667ae56;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000004ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000667ae56;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000020;
+  __m128i_out = __lsx_vbitclr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
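+  /* __lsx_vbitseti_{b,h,w,d}: set the immediate-selected bit in every
+     element, i.e. res = op0 | (1 << imm).  */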
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0020002000200020;
+  __m128i_out = __lsx_vbitseti_h(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0040000000ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0040000000000000;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x36);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x54beed87bc3f2be1;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8024d8f6a494afcb;
+  *((unsigned long*)& __m128i_result[1]) = 0x54feed87bc3f2be1;
+  *((unsigned long*)& __m128i_result[0]) = 0x8064d8f6a494afcb;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x36);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000c400;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x001000100010c410;
+  __m128i_out = __lsx_vbitseti_h(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2e2b34ca59fa4c88;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3b2c8aefd44be966;
+  *((unsigned long*)& __m128i_result[1]) = 0x3e2b34ca59fa4c88;
+  *((unsigned long*)& __m128i_result[0]) = 0x3b2c8aefd44be966;
+  __m128i_out = __lsx_vbitseti_w(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000017fda829;
+  *((unsigned long*)& __m128i_result[1]) = 0x0040004000400040;
+  *((unsigned long*)& __m128i_result[0]) = 0x0040004017fda869;
+  __m128i_out = __lsx_vbitseti_h(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x800000ff000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x800000ff080000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000000010000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000000010000;
+  __m128i_out = __lsx_vbitseti_w(__m128i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0004000000040000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0004000000040000;
+  __m128i_out = __lsx_vbitseti_w(__m128i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vbitseti_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf51cf8dad6040188;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0982e2daf234ed87;
+  *((unsigned long*)& __m128i_result[1]) = 0xf51cf8dad6040188;
+  *((unsigned long*)& __m128i_result[0]) = 0x0982eadaf234ed87;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x2b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000000000000;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x31);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000006;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000080000006;
+  __m128i_out = __lsx_vbitseti_w(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000080000000000;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x2b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000010000003f;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000030000003f;
+  __m128i_out = __lsx_vbitseti_w(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xe5e5e5e5e5e5e5e5;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe5e5e5e5e4e4e46d;
+  *((unsigned long*)& __m128i_result[1]) = 0xe5e5e5e5e5e5e5e5;
+  *((unsigned long*)& __m128i_result[0]) = 0xe5e5e5e5e4e4e46d;
+  __m128i_out = __lsx_vbitseti_w(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vbitseti_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1000100010001000;
+  __m128i_out = __lsx_vbitseti_h(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0800080008000800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0800080008000800;
+  __m128i_out = __lsx_vbitseti_h(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0100000001000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0100000001000000;
+  __m128i_out = __lsx_vbitseti_w(__m128i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000007fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_result[0]) = 0x2020202020207fff;
+  __m128i_out = __lsx_vbitseti_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000900000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000900013fa0;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x23);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x40f3fa0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3ff0008000800080;
+  *((unsigned long*)& __m128i_result[0]) = 0x40f3fa8000800080;
+  __m128i_out = __lsx_vbitseti_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000040000000000;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x2a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0404040404040404;
+  *((unsigned long*)& __m128i_result[0]) = 0xc404040404040404;
+  __m128i_out = __lsx_vbitseti_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000040804000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000040804000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000040a04000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000040a04000;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vbitseti_w(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1f81e3779b97f4a8;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff02000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1f81e3779b97f4a8;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000008000000080;
+  __m128i_out = __lsx_vbitseti_w(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0100010001000101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0100010001000101;
+  __m128i_out = __lsx_vbitseti_h(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vbitseti_b(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000010000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000010000000;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002711250a27112;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00d2701294027112;
+  *((unsigned long*)& __m128i_result[1]) = 0x080a791a58aa791a;
+  *((unsigned long*)& __m128i_result[0]) = 0x08da781a9c0a791a;
+  __m128i_out = __lsx_vbitseti_b(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0303030303030303;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0303030303030303;
+  *((unsigned long*)& __m128i_result[1]) = 0x1313131313131313;
+  *((unsigned long*)& __m128i_result[0]) = 0x1313131313131313;
+  __m128i_out = __lsx_vbitseti_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000000000000;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x30);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000000;
+  __m128i_out = __lsx_vbitseti_d(__m128i_op0,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff0008000000080;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff0008000000080;
+  __m128i_out = __lsx_vbitseti_w(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
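+  /* __lsx_vbitrevi_{b,h,w,d}: toggle the immediate-selected bit in every
+     element, i.e. res = op0 ^ (1 << imm).  */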
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000003004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000400000004000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000400000007004;
+  __m128i_out = __lsx_vbitrevi_w(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfeffffffffffffff;
+  __m128i_out = __lsx_vbitrevi_d(__m128i_op0,0x38);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vbitrevi_w(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x4000400040004000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4000400040004000;
+  __m128i_out = __lsx_vbitrevi_h(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000007fff8000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001008100000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0800080077ff8800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0801088108000805;
+  __m128i_out = __lsx_vbitrevi_h(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0202020202020202;
+  *((unsigned long*)& __m128i_result[0]) = 0x0202020202020202;
+  __m128i_out = __lsx_vbitrevi_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe86ce7eb5e9ce950;
+  *((unsigned long*)& __m128i_result[1]) = 0x0404040404040404;
+  *((unsigned long*)& __m128i_result[0]) = 0xec68e3ef5a98ed54;
+  __m128i_out = __lsx_vbitrevi_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000400000004000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000400000204010;
+  __m128i_out = __lsx_vbitrevi_w(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_result[0]) = 0x0400040004000400;
+  __m128i_out = __lsx_vbitrevi_h(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffff02;
+  *((unsigned long*)& __m128i_result[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_result[0]) = 0x04000400fbfffb02;
+  __m128i_out = __lsx_vbitrevi_h(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010000000100000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010000000100000;
+  __m128i_out = __lsx_vbitrevi_w(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_result[0]) = 0x040004000400040d;
+  __m128i_out = __lsx_vbitrevi_h(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000004f804f81;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000004f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000004fc04f81;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000004fc04f80;
+  __m128i_out = __lsx_vbitrevi_d(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0040004000400040;
+  *((unsigned long*)& __m128i_result[0]) = 0x0040004000400040;
+  __m128i_out = __lsx_vbitrevi_h(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x1010101010101010;
+  *((unsigned long*)& __m128i_result[0]) = 0xefefefefefefefef;
+  __m128i_out = __lsx_vbitrevi_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_result[0]) = 0x4040404040404040;
+  __m128i_out = __lsx_vbitrevi_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21201f1e1d1c1b1a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1918171615141312;
+  *((unsigned long*)& __m128i_result[1]) = 0x01203f1e3d1c3b1a;
+  *((unsigned long*)& __m128i_result[0]) = 0x3918371635143312;
+  __m128i_out = __lsx_vbitrevi_h(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x61608654a2d4f6da;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff0800080008000;
+  *((unsigned long*)& __m128i_result[0]) = 0xe160065422d476da;
+  __m128i_out = __lsx_vbitrevi_h(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x37c0001000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x37c0001000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x77c0401040004000;
+  *((unsigned long*)& __m128i_result[0]) = 0x77c0401040004000;
+  __m128i_out = __lsx_vbitrevi_h(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x77c0404a4000403a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x77c03fd640003fc6;
+  *((unsigned long*)& __m128i_result[1]) = 0x75c0404a4200403a;
+  *((unsigned long*)& __m128i_result[0]) = 0x75c03fd642003fc6;
+  __m128i_out = __lsx_vbitrevi_w(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vbitrevi_b(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080808280808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808280808;
+  __m128i_out = __lsx_vbitrevi_d(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffefffffffeff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000100fffffeff;
+  __m128i_out = __lsx_vbitrevi_w(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0404050404040404;
+  *((unsigned long*)& __m128i_result[0]) = 0x0404050404040404;
+  __m128i_out = __lsx_vbitrevi_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1000100010001000;
+  __m128i_out = __lsx_vbitrevi_h(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m128i_result[0]) = 0xbfbfbfbfbfbfbfbf;
+  __m128i_out = __lsx_vbitrevi_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000040000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000040000000;
+  __m128i_out = __lsx_vbitrevi_d(__m128i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000020000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000020000;
+  __m128i_out = __lsx_vbitrevi_d(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x2000200020002000;
+  *((unsigned long*)& __m128i_result[0]) = 0x2000200020002000;
+  __m128i_out = __lsx_vbitrevi_h(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x441ba9fcffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x181b2541ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x401fadf8fbfbfbfb;
+  *((unsigned long*)& __m128i_result[0]) = 0x1c1f2145fbfbfbfb;
+  __m128i_out = __lsx_vbitrevi_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000100;
+  __m128i_out = __lsx_vbitrevi_d(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffefff00001000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffefff00001000;
+  __m128i_out = __lsx_vbitrevi_w(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080808080808080;
+  __m128i_out = __lsx_vbitrevi_b(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000000;
+  __m128i_out = __lsx_vbitrevi_d(__m128i_op0,0x21);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000002000;
+  __m128i_out = __lsx_vbitrevi_d(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000010000000100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010000000100;
+  __m128i_out = __lsx_vbitrevi_w(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd6a09e662ab46b31;
+  *((unsigned long*)& __m128i_op0[0]) = 0x34b8122ef4054bb3;
+  *((unsigned long*)& __m128i_result[1]) = 0xd6e09e262af46b71;
+  *((unsigned long*)& __m128i_result[0]) = 0x34f8126ef4454bf3;
+  __m128i_out = __lsx_vbitrevi_h(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000200008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000200000;
+  __m128i_out = __lsx_vbitrevi_d(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfefefefdbffefdfe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfefefeeffef7feff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfcfcfcffbdfcfffc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfcfcfcedfcf5fcfd;
+  __m128i_out = __lsx_vbitrevi_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000555889;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000002580f01;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010000000455889;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010000002480f01;
+  __m128i_out = __lsx_vbitrevi_w(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00060fbf00040fbf;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00020fbf00000fbf;
+  *((unsigned long*)& __m128i_result[1]) = 0x00060fbf02040fbf;
+  *((unsigned long*)& __m128i_result[0]) = 0x00020fbf02000fbf;
+  __m128i_out = __lsx_vbitrevi_d(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x400000003fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x4000000040000000;
+  __m128i_out = __lsx_vbitrevi_w(__m128i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x021b7d24c9678a35;
+  *((unsigned long*)& __m128i_op0[0]) = 0x030298a6a1030a49;
+  *((unsigned long*)& __m128i_result[1]) = 0x00197f26cb658837;
+  *((unsigned long*)& __m128i_result[0]) = 0x01009aa4a301084b;
+  __m128i_out = __lsx_vbitrevi_b(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x3);
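+  /* The halfword extracted above is stored in int_out but not compared
+     against a reference value.  */
+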
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000c6c60000c6c6;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000c6c58000c6b2;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000c6c40000c6c6;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000c6c78000c6b2;
+  __m128i_out = __lsx_vbitrevi_d(__m128i_op0,0x21);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff7fffffff7f;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff7fffffff7f;
+  __m128i_out = __lsx_vbitrevi_w(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
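+  /* __lsx_vbitclri_{b,h,w,d}: clear the immediate-selected bit in every
+     element, i.e. res = op0 & ~(1 << imm).  */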
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vbitclri_h(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000b0000000b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000201000000000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000b0000000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000201000000000b;
+  __m128i_out = __lsx_vbitclri_d(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x004200a000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x004200a000200000;
+  *((unsigned long*)& __m128i_result[1]) = 0x004200a000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x004200a000200000;
+  __m128i_out = __lsx_vbitclri_h(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000efffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_w(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000002ff5;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc2cf2471e9b7d7a4;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000027f5;
+  *((unsigned long*)& __m128i_result[0]) = 0xc2cf2471e9b7d7a4;
+  __m128i_out = __lsx_vbitclri_w(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7505443065413aed;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100d6effefd0498;
+  *((unsigned long*)& __m128i_result[1]) = 0x7404443064403aec;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000d6eefefc0498;
+  __m128i_out = __lsx_vbitclri_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_d(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_d(__m128i_op0,0x36);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x64b680a2ae3af8ca;
+  *((unsigned long*)& __m128i_op0[0]) = 0x161c0c363c200826;
+  *((unsigned long*)& __m128i_result[1]) = 0x64b680a2ae3af8c8;
+  *((unsigned long*)& __m128i_result[0]) = 0x161c0c363c200824;
+  __m128i_out = __lsx_vbitclri_d(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xbfffbfffbfffbffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_h(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff807f807f807f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff807f807f807f80;
+  *((unsigned long*)& __m128i_result[1]) = 0xfb807b807b807b80;
+  *((unsigned long*)& __m128i_result[0]) = 0xfb807b807b807b80;
+  __m128i_out = __lsx_vbitclri_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1000100010001000;
+  __m128i_out = __lsx_vbitclri_w(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_w(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_h(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_w(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfbffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfbffffffffffffff;
+  __m128i_out = __lsx_vbitclri_d(__m128i_op0,0x3a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9941d1d5f4ba9d08;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x9941d155f43a9d08;
+  __m128i_out = __lsx_vbitclri_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffbfffffffbf;
+  __m128i_out = __lsx_vbitclri_w(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x03f1e3d28b1a8a1a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x03f1e3d28b1a8a1a;
+  __m128i_out = __lsx_vbitclri_d(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffda6f;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffe3d7;
+  *((unsigned long*)& __m128i_result[1]) = 0xfefffffffeffda6f;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefffffffeffe3d7;
+  __m128i_out = __lsx_vbitclri_w(__m128i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vbitclri_d(__m128i_op0,0x26);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_d(__m128i_op0,0x30);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000040;
+  __m128i_out = __lsx_vbitclri_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080638063;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080808080638063;
+  __m128i_out = __lsx_vbitclri_h(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0004000400040004;
+  __m128i_out = __lsx_vbitclri_h(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_w(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vbitclri_d(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_w(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_w(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_d(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_w(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000200008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000200000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000200008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000200000;
+  __m128i_out = __lsx_vbitclri_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000200000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000200000001;
+  __m128i_out = __lsx_vbitclri_b(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xdfdfdfdfdfdfdfdf;
+  *((unsigned long*)& __m128i_result[0]) = 0xdfdfdfdfdfdfdfdf;
+  __m128i_out = __lsx_vbitclri_b(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitclri_h(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-builtin.c b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-builtin.c
new file mode 100644
index 00000000000..70f5000b29f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-builtin.c
@@ -0,0 +1,1461 @@
+/* Test builtins for LoongArch LSX ASE instructions.  */
+/* { dg-do compile } */
+/* { dg-options "-mlsx" } */
+/* { dg-final { scan-assembler-times "lsx_vsll_b:.*vsll\\.b.*lsx_vsll_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsll_h:.*vsll\\.h.*lsx_vsll_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsll_w:.*vsll\\.w.*lsx_vsll_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsll_d:.*vsll\\.d.*lsx_vsll_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslli_b:.*vslli\\.b.*lsx_vslli_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslli_h:.*vslli\\.h.*lsx_vslli_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslli_w:.*vslli\\.w.*lsx_vslli_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslli_d:.*vslli\\.d.*lsx_vslli_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsra_b:.*vsra\\.b.*lsx_vsra_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsra_h:.*vsra\\.h.*lsx_vsra_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsra_w:.*vsra\\.w.*lsx_vsra_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsra_d:.*vsra\\.d.*lsx_vsra_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrai_b:.*vsrai\\.b.*lsx_vsrai_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrai_h:.*vsrai\\.h.*lsx_vsrai_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrai_w:.*vsrai\\.w.*lsx_vsrai_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrai_d:.*vsrai\\.d.*lsx_vsrai_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrar_b:.*vsrar\\.b.*lsx_vsrar_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrar_h:.*vsrar\\.h.*lsx_vsrar_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrar_w:.*vsrar\\.w.*lsx_vsrar_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrar_d:.*vsrar\\.d.*lsx_vsrar_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrari_b:.*vsrari\\.b.*lsx_vsrari_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrari_h:.*vsrari\\.h.*lsx_vsrari_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrari_w:.*vsrari\\.w.*lsx_vsrari_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrari_d:.*vsrari\\.d.*lsx_vsrari_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrl_b:.*vsrl\\.b.*lsx_vsrl_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrl_h:.*vsrl\\.h.*lsx_vsrl_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrl_w:.*vsrl\\.w.*lsx_vsrl_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrl_d:.*vsrl\\.d.*lsx_vsrl_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrli_b:.*vsrli\\.b.*lsx_vsrli_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrli_h:.*vsrli\\.h.*lsx_vsrli_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrli_w:.*vsrli\\.w.*lsx_vsrli_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrli_d:.*vsrli\\.d.*lsx_vsrli_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlr_b:.*vsrlr\\.b.*lsx_vsrlr_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlr_h:.*vsrlr\\.h.*lsx_vsrlr_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlr_w:.*vsrlr\\.w.*lsx_vsrlr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlr_d:.*vsrlr\\.d.*lsx_vsrlr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlri_b:.*vsrlri\\.b.*lsx_vsrlri_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlri_h:.*vsrlri\\.h.*lsx_vsrlri_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlri_w:.*vsrlri\\.w.*lsx_vsrlri_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlri_d:.*vsrlri\\.d.*lsx_vsrlri_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitclr_b:.*vbitclr\\.b.*lsx_vbitclr_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitclr_h:.*vbitclr\\.h.*lsx_vbitclr_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitclr_w:.*vbitclr\\.w.*lsx_vbitclr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitclr_d:.*vbitclr\\.d.*lsx_vbitclr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitclri_b:.*vbitclri\\.b.*lsx_vbitclri_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitclri_h:.*vbitclri\\.h.*lsx_vbitclri_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitclri_w:.*vbitclri\\.w.*lsx_vbitclri_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitclri_d:.*vbitclri\\.d.*lsx_vbitclri_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitset_b:.*vbitset\\.b.*lsx_vbitset_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitset_h:.*vbitset\\.h.*lsx_vbitset_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitset_w:.*vbitset\\.w.*lsx_vbitset_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitset_d:.*vbitset\\.d.*lsx_vbitset_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitseti_b:.*vbitseti\\.b.*lsx_vbitseti_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitseti_h:.*vbitseti\\.h.*lsx_vbitseti_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitseti_w:.*vbitseti\\.w.*lsx_vbitseti_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitseti_d:.*vbitseti\\.d.*lsx_vbitseti_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitrev_b:.*vbitrev\\.b.*lsx_vbitrev_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitrev_h:.*vbitrev\\.h.*lsx_vbitrev_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitrev_w:.*vbitrev\\.w.*lsx_vbitrev_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitrev_d:.*vbitrev\\.d.*lsx_vbitrev_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitrevi_b:.*vbitrevi\\.b.*lsx_vbitrevi_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitrevi_h:.*vbitrevi\\.h.*lsx_vbitrevi_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitrevi_w:.*vbitrevi\\.w.*lsx_vbitrevi_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitrevi_d:.*vbitrevi\\.d.*lsx_vbitrevi_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vadd_b:.*vadd\\.b.*lsx_vadd_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vadd_h:.*vadd\\.h.*lsx_vadd_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vadd_w:.*vadd\\.w.*lsx_vadd_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vadd_d:.*vadd\\.d.*lsx_vadd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddi_bu:.*vaddi\\.bu.*lsx_vaddi_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddi_hu:.*vaddi\\.hu.*lsx_vaddi_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddi_wu:.*vaddi\\.wu.*lsx_vaddi_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddi_du:.*vaddi\\.du.*lsx_vaddi_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsub_b:.*vsub\\.b.*lsx_vsub_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsub_h:.*vsub\\.h.*lsx_vsub_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsub_w:.*vsub\\.w.*lsx_vsub_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsub_d:.*vsub\\.d.*lsx_vsub_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubi_bu:.*vsubi\\.bu.*lsx_vsubi_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubi_hu:.*vsubi\\.hu.*lsx_vsubi_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubi_wu:.*vsubi\\.wu.*lsx_vsubi_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubi_du:.*vsubi\\.du.*lsx_vsubi_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmax_b:.*vmax\\.b.*lsx_vmax_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmax_h:.*vmax\\.h.*lsx_vmax_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmax_w:.*vmax\\.w.*lsx_vmax_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmax_d:.*vmax\\.d.*lsx_vmax_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaxi_b:.*vmaxi\\.b.*lsx_vmaxi_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaxi_h:.*vmaxi\\.h.*lsx_vmaxi_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaxi_w:.*vmaxi\\.w.*lsx_vmaxi_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaxi_d:.*vmaxi\\.d.*lsx_vmaxi_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmax_bu:.*vmax\\.bu.*lsx_vmax_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmax_hu:.*vmax\\.hu.*lsx_vmax_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmax_wu:.*vmax\\.wu.*lsx_vmax_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmax_du:.*vmax\\.du.*lsx_vmax_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaxi_bu:.*vmaxi\\.bu.*lsx_vmaxi_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaxi_hu:.*vmaxi\\.hu.*lsx_vmaxi_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaxi_wu:.*vmaxi\\.wu.*lsx_vmaxi_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaxi_du:.*vmaxi\\.du.*lsx_vmaxi_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmin_b:.*vmin\\.b.*lsx_vmin_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmin_h:.*vmin\\.h.*lsx_vmin_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmin_w:.*vmin\\.w.*lsx_vmin_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmin_d:.*vmin\\.d.*lsx_vmin_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmini_b:.*vmini\\.b.*lsx_vmini_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmini_h:.*vmini\\.h.*lsx_vmini_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmini_w:.*vmini\\.w.*lsx_vmini_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmini_d:.*vmini\\.d.*lsx_vmini_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmin_bu:.*vmin\\.bu.*lsx_vmin_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmin_hu:.*vmin\\.hu.*lsx_vmin_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmin_wu:.*vmin\\.wu.*lsx_vmin_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmin_du:.*vmin\\.du.*lsx_vmin_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmini_bu:.*vmini\\.bu.*lsx_vmini_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmini_hu:.*vmini\\.hu.*lsx_vmini_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmini_wu:.*vmini\\.wu.*lsx_vmini_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmini_du:.*vmini\\.du.*lsx_vmini_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vseq_b:.*vseq\\.b.*lsx_vseq_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vseq_h:.*vseq\\.h.*lsx_vseq_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vseq_w:.*vseq\\.w.*lsx_vseq_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vseq_d:.*vseq\\.d.*lsx_vseq_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vseqi_b:.*vseqi\\.b.*lsx_vseqi_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vseqi_h:.*vseqi\\.h.*lsx_vseqi_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vseqi_w:.*vseqi\\.w.*lsx_vseqi_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vseqi_d:.*vseqi\\.d.*lsx_vseqi_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslti_b:.*vslti\\.b.*lsx_vslti_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslt_b:.*vslt\\.b.*lsx_vslt_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslt_h:.*vslt\\.h.*lsx_vslt_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslt_w:.*vslt\\.w.*lsx_vslt_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslt_d:.*vslt\\.d.*lsx_vslt_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslti_h:.*vslti\\.h.*lsx_vslti_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslti_w:.*vslti\\.w.*lsx_vslti_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslti_d:.*vslti\\.d.*lsx_vslti_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslt_bu:.*vslt\\.bu.*lsx_vslt_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslt_hu:.*vslt\\.hu.*lsx_vslt_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslt_wu:.*vslt\\.wu.*lsx_vslt_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslt_du:.*vslt\\.du.*lsx_vslt_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslti_bu:.*vslti\\.bu.*lsx_vslti_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslti_hu:.*vslti\\.hu.*lsx_vslti_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslti_wu:.*vslti\\.wu.*lsx_vslti_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslti_du:.*vslti\\.du.*lsx_vslti_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsle_b:.*vsle\\.b.*lsx_vsle_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsle_h:.*vsle\\.h.*lsx_vsle_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsle_w:.*vsle\\.w.*lsx_vsle_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsle_d:.*vsle\\.d.*lsx_vsle_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslei_b:.*vslei\\.b.*lsx_vslei_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslei_h:.*vslei\\.h.*lsx_vslei_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslei_w:.*vslei\\.w.*lsx_vslei_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslei_d:.*vslei\\.d.*lsx_vslei_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsle_bu:.*vsle\\.bu.*lsx_vsle_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsle_hu:.*vsle\\.hu.*lsx_vsle_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsle_wu:.*vsle\\.wu.*lsx_vsle_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsle_du:.*vsle\\.du.*lsx_vsle_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslei_bu:.*vslei\\.bu.*lsx_vslei_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslei_hu:.*vslei\\.hu.*lsx_vslei_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslei_wu:.*vslei\\.wu.*lsx_vslei_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vslei_du:.*vslei\\.du.*lsx_vslei_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsat_b:.*vsat\\.b.*lsx_vsat_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsat_h:.*vsat\\.h.*lsx_vsat_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsat_w:.*vsat\\.w.*lsx_vsat_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsat_d:.*vsat\\.d.*lsx_vsat_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsat_bu:.*vsat\\.bu.*lsx_vsat_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsat_hu:.*vsat\\.hu.*lsx_vsat_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsat_wu:.*vsat\\.wu.*lsx_vsat_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsat_du:.*vsat\\.du.*lsx_vsat_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vadda_b:.*vadda\\.b.*lsx_vadda_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vadda_h:.*vadda\\.h.*lsx_vadda_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vadda_w:.*vadda\\.w.*lsx_vadda_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vadda_d:.*vadda\\.d.*lsx_vadda_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsadd_b:.*vsadd\\.b.*lsx_vsadd_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsadd_h:.*vsadd\\.h.*lsx_vsadd_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsadd_w:.*vsadd\\.w.*lsx_vsadd_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsadd_d:.*vsadd\\.d.*lsx_vsadd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsadd_bu:.*vsadd\\.bu.*lsx_vsadd_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsadd_hu:.*vsadd\\.hu.*lsx_vsadd_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsadd_wu:.*vsadd\\.wu.*lsx_vsadd_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsadd_du:.*vsadd\\.du.*lsx_vsadd_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavg_b:.*vavg\\.b.*lsx_vavg_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavg_h:.*vavg\\.h.*lsx_vavg_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavg_w:.*vavg\\.w.*lsx_vavg_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavg_d:.*vavg\\.d.*lsx_vavg_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavg_bu:.*vavg\\.bu.*lsx_vavg_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavg_hu:.*vavg\\.hu.*lsx_vavg_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavg_wu:.*vavg\\.wu.*lsx_vavg_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavg_du:.*vavg\\.du.*lsx_vavg_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavgr_b:.*vavgr\\.b.*lsx_vavgr_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavgr_h:.*vavgr\\.h.*lsx_vavgr_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavgr_w:.*vavgr\\.w.*lsx_vavgr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavgr_d:.*vavgr\\.d.*lsx_vavgr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavgr_bu:.*vavgr\\.bu.*lsx_vavgr_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavgr_hu:.*vavgr\\.hu.*lsx_vavgr_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavgr_wu:.*vavgr\\.wu.*lsx_vavgr_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vavgr_du:.*vavgr\\.du.*lsx_vavgr_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssub_b:.*vssub\\.b.*lsx_vssub_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssub_h:.*vssub\\.h.*lsx_vssub_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssub_w:.*vssub\\.w.*lsx_vssub_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssub_d:.*vssub\\.d.*lsx_vssub_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssub_bu:.*vssub\\.bu.*lsx_vssub_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssub_hu:.*vssub\\.hu.*lsx_vssub_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssub_wu:.*vssub\\.wu.*lsx_vssub_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssub_du:.*vssub\\.du.*lsx_vssub_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vabsd_b:.*vabsd\\.b.*lsx_vabsd_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vabsd_h:.*vabsd\\.h.*lsx_vabsd_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vabsd_w:.*vabsd\\.w.*lsx_vabsd_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vabsd_d:.*vabsd\\.d.*lsx_vabsd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vabsd_bu:.*vabsd\\.bu.*lsx_vabsd_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vabsd_hu:.*vabsd\\.hu.*lsx_vabsd_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vabsd_wu:.*vabsd\\.wu.*lsx_vabsd_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vabsd_du:.*vabsd\\.du.*lsx_vabsd_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmul_b:.*vmul\\.b.*lsx_vmul_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmul_h:.*vmul\\.h.*lsx_vmul_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmul_w:.*vmul\\.w.*lsx_vmul_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmul_d:.*vmul\\.d.*lsx_vmul_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmadd_b:.*vmadd\\.b.*lsx_vmadd_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmadd_h:.*vmadd\\.h.*lsx_vmadd_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmadd_w:.*vmadd\\.w.*lsx_vmadd_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmadd_d:.*vmadd\\.d.*lsx_vmadd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmsub_b:.*vmsub\\.b.*lsx_vmsub_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmsub_h:.*vmsub\\.h.*lsx_vmsub_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmsub_w:.*vmsub\\.w.*lsx_vmsub_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmsub_d:.*vmsub\\.d.*lsx_vmsub_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vdiv_b:.*vdiv\\.b.*lsx_vdiv_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vdiv_h:.*vdiv\\.h.*lsx_vdiv_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vdiv_w:.*vdiv\\.w.*lsx_vdiv_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vdiv_d:.*vdiv\\.d.*lsx_vdiv_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vdiv_bu:.*vdiv\\.bu.*lsx_vdiv_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vdiv_hu:.*vdiv\\.hu.*lsx_vdiv_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vdiv_wu:.*vdiv\\.wu.*lsx_vdiv_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vdiv_du:.*vdiv\\.du.*lsx_vdiv_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhaddw_h_b:.*vhaddw\\.h\\.b.*lsx_vhaddw_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhaddw_w_h:.*vhaddw\\.w\\.h.*lsx_vhaddw_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhaddw_d_w:.*vhaddw\\.d\\.w.*lsx_vhaddw_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhaddw_hu_bu:.*vhaddw\\.hu\\.bu.*lsx_vhaddw_hu_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhaddw_wu_hu:.*vhaddw\\.wu\\.hu.*lsx_vhaddw_wu_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhaddw_du_wu:.*vhaddw\\.du\\.wu.*lsx_vhaddw_du_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhsubw_h_b:.*vhsubw\\.h\\.b.*lsx_vhsubw_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhsubw_w_h:.*vhsubw\\.w\\.h.*lsx_vhsubw_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhsubw_d_w:.*vhsubw\\.d\\.w.*lsx_vhsubw_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhsubw_hu_bu:.*vhsubw\\.hu\\.bu.*lsx_vhsubw_hu_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhsubw_wu_hu:.*vhsubw\\.wu\\.hu.*lsx_vhsubw_wu_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhsubw_du_wu:.*vhsubw\\.du\\.wu.*lsx_vhsubw_du_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmod_b:.*vmod\\.b.*lsx_vmod_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmod_h:.*vmod\\.h.*lsx_vmod_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmod_w:.*vmod\\.w.*lsx_vmod_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmod_d:.*vmod\\.d.*lsx_vmod_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmod_bu:.*vmod\\.bu.*lsx_vmod_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmod_hu:.*vmod\\.hu.*lsx_vmod_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmod_wu:.*vmod\\.wu.*lsx_vmod_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmod_du:.*vmod\\.du.*lsx_vmod_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vreplve_b:.*vreplve\\.b.*lsx_vreplve_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vreplve_h:.*vreplve\\.h.*lsx_vreplve_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vreplve_w:.*vreplve\\.w.*lsx_vreplve_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vreplve_d:.*vreplve\\.d.*lsx_vreplve_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vreplvei_b:.*vreplvei\\.b.*lsx_vreplvei_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vreplvei_h:.*vreplvei\\.h.*lsx_vreplvei_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vreplvei_w:.*vreplvei\\.w.*lsx_vreplvei_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vreplvei_d:.*vreplvei\\.d.*lsx_vreplvei_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickev_b:.*vpickev\\.b.*lsx_vpickev_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickev_h:.*vpickev\\.h.*lsx_vpickev_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickev_w:.*vpickev\\.w.*lsx_vpickev_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickev_d:.*vilvl\\.d.*lsx_vpickev_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickod_b:.*vpickod\\.b.*lsx_vpickod_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickod_h:.*vpickod\\.h.*lsx_vpickod_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickod_w:.*vpickod\\.w.*lsx_vpickod_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickod_d:.*vilvh\\.d.*lsx_vpickod_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vilvh_b:.*vilvh\\.b.*lsx_vilvh_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vilvh_h:.*vilvh\\.h.*lsx_vilvh_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vilvh_w:.*vilvh\\.w.*lsx_vilvh_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vilvh_d:.*vilvh\\.d.*lsx_vilvh_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vilvl_b:.*vilvl\\.b.*lsx_vilvl_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vilvl_h:.*vilvl\\.h.*lsx_vilvl_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vilvl_w:.*vilvl\\.w.*lsx_vilvl_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vilvl_d:.*vilvl\\.d.*lsx_vilvl_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpackev_b:.*vpackev\\.b.*lsx_vpackev_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpackev_h:.*vpackev\\.h.*lsx_vpackev_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpackev_w:.*vpackev\\.w.*lsx_vpackev_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpackev_d:.*vilvl\\.d.*lsx_vpackev_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpackod_b:.*vpackod\\.b.*lsx_vpackod_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpackod_h:.*vpackod\\.h.*lsx_vpackod_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpackod_w:.*vpackod\\.w.*lsx_vpackod_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpackod_d:.*vilvh\\.d.*lsx_vpackod_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vshuf_h:.*vshuf\\.h.*lsx_vshuf_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vshuf_w:.*vshuf\\.w.*lsx_vshuf_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vshuf_d:.*vshuf\\.d.*lsx_vshuf_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vand_v:.*vand\\.v.*lsx_vand_v" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vandi_b:.*vandi\\.b.*lsx_vandi_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vor_v:.*vor\\.v.*lsx_vor_v" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vori_b:.*vbitseti\\.b.*lsx_vori_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vnor_v:.*vnor\\.v.*lsx_vnor_v" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vnori_b:.*vnori\\.b.*lsx_vnori_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vxor_v:.*vxor\\.v.*lsx_vxor_v" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vxori_b:.*vbitrevi\\.b.*lsx_vxori_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitsel_v:.*vbitsel\\.v.*lsx_vbitsel_v" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbitseli_b:.*vbitseli\\.b.*lsx_vbitseli_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vshuf4i_b:.*vshuf4i\\.b.*lsx_vshuf4i_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vshuf4i_h:.*vshuf4i\\.h.*lsx_vshuf4i_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vshuf4i_w:.*vshuf4i\\.w.*lsx_vshuf4i_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vreplgr2vr_b:.*vreplgr2vr\\.b.*lsx_vreplgr2vr_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vreplgr2vr_h:.*vreplgr2vr\\.h.*lsx_vreplgr2vr_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vreplgr2vr_w:.*vreplgr2vr\\.w.*lsx_vreplgr2vr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vreplgr2vr_d:.*vreplgr2vr\\.d.*lsx_vreplgr2vr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpcnt_b:.*vpcnt\\.b.*lsx_vpcnt_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpcnt_h:.*vpcnt\\.h.*lsx_vpcnt_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpcnt_w:.*vpcnt\\.w.*lsx_vpcnt_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpcnt_d:.*vpcnt\\.d.*lsx_vpcnt_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vclo_b:.*vclo\\.b.*lsx_vclo_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vclo_h:.*vclo\\.h.*lsx_vclo_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vclo_w:.*vclo\\.w.*lsx_vclo_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vclo_d:.*vclo\\.d.*lsx_vclo_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vclz_b:.*vclz\\.b.*lsx_vclz_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vclz_h:.*vclz\\.h.*lsx_vclz_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vclz_w:.*vclz\\.w.*lsx_vclz_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vclz_d:.*vclz\\.d.*lsx_vclz_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickve2gr_b:.*vpickve2gr\\.b.*lsx_vpickve2gr_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickve2gr_h:.*vpickve2gr\\.h.*lsx_vpickve2gr_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickve2gr_w:.*vpickve2gr\\.w.*lsx_vpickve2gr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickve2gr_d:.*vpickve2gr\\.d.*lsx_vpickve2gr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickve2gr_bu:.*vpickve2gr\\.bu.*lsx_vpickve2gr_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickve2gr_hu:.*vpickve2gr\\.hu.*lsx_vpickve2gr_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickve2gr_wu:.*vpickve2gr\\.wu.*lsx_vpickve2gr_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpickve2gr_du:.*vpickve2gr\\.du.*lsx_vpickve2gr_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vinsgr2vr_b:.*vinsgr2vr\\.b.*lsx_vinsgr2vr_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vinsgr2vr_h:.*vinsgr2vr\\.h.*lsx_vinsgr2vr_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vinsgr2vr_w:.*vinsgr2vr\\.w.*lsx_vinsgr2vr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vinsgr2vr_d:.*vinsgr2vr\\.d.*lsx_vinsgr2vr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfadd_s:.*vfadd\\.s.*lsx_vfadd_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfadd_d:.*vfadd\\.d.*lsx_vfadd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfsub_s:.*vfsub\\.s.*lsx_vfsub_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfsub_d:.*vfsub\\.d.*lsx_vfsub_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmul_s:.*vfmul\\.s.*lsx_vfmul_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmul_d:.*vfmul\\.d.*lsx_vfmul_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfdiv_s:.*vfdiv\\.s.*lsx_vfdiv_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfdiv_d:.*vfdiv\\.d.*lsx_vfdiv_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcvt_h_s:.*vfcvt\\.h\\.s.*lsx_vfcvt_h_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcvt_s_d:.*vfcvt\\.s\\.d.*lsx_vfcvt_s_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmin_s:.*vfmin\\.s.*lsx_vfmin_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmin_d:.*vfmin\\.d.*lsx_vfmin_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmina_s:.*vfmina\\.s.*lsx_vfmina_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmina_d:.*vfmina\\.d.*lsx_vfmina_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmax_s:.*vfmax\\.s.*lsx_vfmax_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmax_d:.*vfmax\\.d.*lsx_vfmax_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmaxa_s:.*vfmaxa\\.s.*lsx_vfmaxa_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmaxa_d:.*vfmaxa\\.d.*lsx_vfmaxa_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfclass_s:.*vfclass\\.s.*lsx_vfclass_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfclass_d:.*vfclass\\.d.*lsx_vfclass_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfsqrt_s:.*vfsqrt\\.s.*lsx_vfsqrt_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfsqrt_d:.*vfsqrt\\.d.*lsx_vfsqrt_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrecip_s:.*vfrecip\\.s.*lsx_vfrecip_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrecip_d:.*vfrecip\\.d.*lsx_vfrecip_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrint_s:.*vfrint\\.s.*lsx_vfrint_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrint_d:.*vfrint\\.d.*lsx_vfrint_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrsqrt_s:.*vfrsqrt\\.s.*lsx_vfrsqrt_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrsqrt_d:.*vfrsqrt\\.d.*lsx_vfrsqrt_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vflogb_s:.*vflogb\\.s.*lsx_vflogb_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vflogb_d:.*vflogb\\.d.*lsx_vflogb_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcvth_s_h:.*vfcvth\\.s\\.h.*lsx_vfcvth_s_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcvth_d_s:.*vfcvth\\.d\\.s.*lsx_vfcvth_d_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcvtl_s_h:.*vfcvtl\\.s\\.h.*lsx_vfcvtl_s_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcvtl_d_s:.*vfcvtl\\.d\\.s.*lsx_vfcvtl_d_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftint_w_s:.*vftint\\.w\\.s.*lsx_vftint_w_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftint_l_d:.*vftint\\.l\\.d.*lsx_vftint_l_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftint_wu_s:.*vftint\\.wu\\.s.*lsx_vftint_wu_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftint_lu_d:.*vftint\\.lu\\.d.*lsx_vftint_lu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrz_w_s:.*vftintrz\\.w\\.s.*lsx_vftintrz_w_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrz_l_d:.*vftintrz\\.l\\.d.*lsx_vftintrz_l_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrz_wu_s:.*vftintrz\\.wu\\.s.*lsx_vftintrz_wu_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrz_lu_d:.*vftintrz\\.lu\\.d.*lsx_vftintrz_lu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vffint_s_w:.*vffint\\.s\\.w.*lsx_vffint_s_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vffint_d_l:.*vffint\\.d\\.l.*lsx_vffint_d_l" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vffint_s_wu:.*vffint\\.s\\.wu.*lsx_vffint_s_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vffint_d_lu:.*vffint\\.d\\.lu.*lsx_vffint_d_lu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vandn_v:.*vandn\\.v.*lsx_vandn_v" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vneg_b:.*vneg\\.b.*lsx_vneg_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vneg_h:.*vneg\\.h.*lsx_vneg_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vneg_w:.*vneg\\.w.*lsx_vneg_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vneg_d:.*vneg\\.d.*lsx_vneg_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmuh_b:.*vmuh\\.b.*lsx_vmuh_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmuh_h:.*vmuh\\.h.*lsx_vmuh_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmuh_w:.*vmuh\\.w.*lsx_vmuh_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmuh_d:.*vmuh\\.d.*lsx_vmuh_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmuh_bu:.*vmuh\\.bu.*lsx_vmuh_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmuh_hu:.*vmuh\\.hu.*lsx_vmuh_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmuh_wu:.*vmuh\\.wu.*lsx_vmuh_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmuh_du:.*vmuh\\.du.*lsx_vmuh_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsllwil_h_b:.*vsllwil\\.h\\.b.*lsx_vsllwil_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsllwil_w_h:.*vsllwil\\.w\\.h.*lsx_vsllwil_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsllwil_d_w:.*vsllwil\\.d\\.w.*lsx_vsllwil_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsllwil_hu_bu:.*vsllwil\\.hu\\.bu.*lsx_vsllwil_hu_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsllwil_wu_hu:.*vsllwil\\.wu\\.hu.*lsx_vsllwil_wu_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsllwil_du_wu:.*vsllwil\\.du\\.wu.*lsx_vsllwil_du_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsran_b_h:.*vsran\\.b\\.h.*lsx_vsran_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsran_h_w:.*vsran\\.h\\.w.*lsx_vsran_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsran_w_d:.*vsran\\.w\\.d.*lsx_vsran_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssran_b_h:.*vssran\\.b\\.h.*lsx_vssran_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssran_h_w:.*vssran\\.h\\.w.*lsx_vssran_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssran_w_d:.*vssran\\.w\\.d.*lsx_vssran_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssran_bu_h:.*vssran\\.bu\\.h.*lsx_vssran_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssran_hu_w:.*vssran\\.hu\\.w.*lsx_vssran_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssran_wu_d:.*vssran\\.wu\\.d.*lsx_vssran_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrarn_b_h:.*vsrarn\\.b\\.h.*lsx_vsrarn_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrarn_h_w:.*vsrarn\\.h\\.w.*lsx_vsrarn_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrarn_w_d:.*vsrarn\\.w\\.d.*lsx_vsrarn_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarn_b_h:.*vssrarn\\.b\\.h.*lsx_vssrarn_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarn_h_w:.*vssrarn\\.h\\.w.*lsx_vssrarn_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarn_w_d:.*vssrarn\\.w\\.d.*lsx_vssrarn_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarn_bu_h:.*vssrarn\\.bu\\.h.*lsx_vssrarn_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarn_hu_w:.*vssrarn\\.hu\\.w.*lsx_vssrarn_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarn_wu_d:.*vssrarn\\.wu\\.d.*lsx_vssrarn_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrln_b_h:.*vsrln\\.b\\.h.*lsx_vsrln_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrln_h_w:.*vsrln\\.h\\.w.*lsx_vsrln_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrln_w_d:.*vsrln\\.w\\.d.*lsx_vsrln_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrln_bu_h:.*vssrln\\.bu\\.h.*lsx_vssrln_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrln_hu_w:.*vssrln\\.hu\\.w.*lsx_vssrln_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrln_wu_d:.*vssrln\\.wu\\.d.*lsx_vssrln_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlrn_b_h:.*vsrlrn\\.b\\.h.*lsx_vsrlrn_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlrn_h_w:.*vsrlrn\\.h\\.w.*lsx_vsrlrn_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlrn_w_d:.*vsrlrn\\.w\\.d.*lsx_vsrlrn_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrn_bu_h:.*vssrlrn\\.bu\\.h.*lsx_vssrlrn_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrn_hu_w:.*vssrlrn\\.hu\\.w.*lsx_vssrlrn_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrn_wu_d:.*vssrlrn\\.wu\\.d.*lsx_vssrlrn_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrstpi_b:.*vfrstpi\\.b.*lsx_vfrstpi_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrstpi_h:.*vfrstpi\\.h.*lsx_vfrstpi_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrstp_b:.*vfrstp\\.b.*lsx_vfrstp_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrstp_h:.*vfrstp\\.h.*lsx_vfrstp_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vshuf4i_d:.*vshuf4i\\.d.*lsx_vshuf4i_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbsrl_v:.*vbsrl\\.v.*lsx_vbsrl_v" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vbsll_v:.*vbsll\\.v.*lsx_vbsll_v" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vextrins_b:.*vextrins\\.b.*lsx_vextrins_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vextrins_h:.*vextrins\\.h.*lsx_vextrins_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vextrins_w:.*vextrins\\.w.*lsx_vextrins_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vextrins_d:.*vextrins\\.d.*lsx_vextrins_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmskltz_b:.*vmskltz\\.b.*lsx_vmskltz_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmskltz_h:.*vmskltz\\.h.*lsx_vmskltz_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmskltz_w:.*vmskltz\\.w.*lsx_vmskltz_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmskltz_d:.*vmskltz\\.d.*lsx_vmskltz_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsigncov_b:.*vsigncov\\.b.*lsx_vsigncov_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsigncov_h:.*vsigncov\\.h.*lsx_vsigncov_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsigncov_w:.*vsigncov\\.w.*lsx_vsigncov_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsigncov_d:.*vsigncov\\.d.*lsx_vsigncov_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmadd_s:.*vfmadd\\.s.*lsx_vfmadd_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmadd_d:.*vfmadd\\.d.*lsx_vfmadd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmsub_s:.*vfmsub\\.s.*lsx_vfmsub_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfmsub_d:.*vfmsub\\.d.*lsx_vfmsub_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfnmadd_s:.*vfnmadd\\.s.*lsx_vfnmadd_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfnmadd_d:.*vfnmadd\\.d.*lsx_vfnmadd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfnmsub_s:.*vfnmsub\\.s.*lsx_vfnmsub_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfnmsub_d:.*vfnmsub\\.d.*lsx_vfnmsub_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrne_w_s:.*vftintrne\\.w\\.s.*lsx_vftintrne_w_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrne_l_d:.*vftintrne\\.l\\.d.*lsx_vftintrne_l_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrp_w_s:.*vftintrp\\.w\\.s.*lsx_vftintrp_w_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrp_l_d:.*vftintrp\\.l\\.d.*lsx_vftintrp_l_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrm_w_s:.*vftintrm\\.w\\.s.*lsx_vftintrm_w_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrm_l_d:.*vftintrm\\.l\\.d.*lsx_vftintrm_l_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftint_w_d:.*vftint\\.w\\.d.*lsx_vftint_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vffint_s_l:.*vffint\\.s\\.l.*lsx_vffint_s_l" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrz_w_d:.*vftintrz\\.w\\.d.*lsx_vftintrz_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrp_w_d:.*vftintrp\\.w\\.d.*lsx_vftintrp_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrm_w_d:.*vftintrm\\.w\\.d.*lsx_vftintrm_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrne_w_d:.*vftintrne\\.w\\.d.*lsx_vftintrne_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintl_l_s:.*vftintl\\.l\\.s.*lsx_vftintl_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftinth_l_s:.*vftinth\\.l\\.s.*lsx_vftinth_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vffinth_d_w:.*vffinth\\.d\\.w.*lsx_vffinth_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vffintl_d_w:.*vffintl\\.d\\.w.*lsx_vffintl_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrzl_l_s:.*vftintrzl\\.l\\.s.*lsx_vftintrzl_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrzh_l_s:.*vftintrzh\\.l\\.s.*lsx_vftintrzh_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrpl_l_s:.*vftintrpl\\.l\\.s.*lsx_vftintrpl_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrph_l_s:.*vftintrph\\.l\\.s.*lsx_vftintrph_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrml_l_s:.*vftintrml\\.l\\.s.*lsx_vftintrml_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrmh_l_s:.*vftintrmh\\.l\\.s.*lsx_vftintrmh_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrnel_l_s:.*vftintrnel\\.l\\.s.*lsx_vftintrnel_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vftintrneh_l_s:.*vftintrneh\\.l\\.s.*lsx_vftintrneh_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrintrne_s:.*vfrintrne\\.s.*lsx_vfrintrne_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrintrne_d:.*vfrintrne\\.d.*lsx_vfrintrne_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrintrz_s:.*vfrintrz\\.s.*lsx_vfrintrz_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrintrz_d:.*vfrintrz\\.d.*lsx_vfrintrz_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrintrp_s:.*vfrintrp\\.s.*lsx_vfrintrp_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrintrp_d:.*vfrintrp\\.d.*lsx_vfrintrp_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrintrm_s:.*vfrintrm\\.s.*lsx_vfrintrm_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfrintrm_d:.*vfrintrm\\.d.*lsx_vfrintrm_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vstelm_b:.*vstelm\\.b.*lsx_vstelm_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vstelm_h:.*vstelm\\.h.*lsx_vstelm_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vstelm_w:.*vstelm\\.w.*lsx_vstelm_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vstelm_d:.*vstelm\\.d.*lsx_vstelm_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwev_d_w:.*vaddwev\\.d\\.w.*lsx_vaddwev_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwev_w_h:.*vaddwev\\.w\\.h.*lsx_vaddwev_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwev_h_b:.*vaddwev\\.h\\.b.*lsx_vaddwev_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwod_d_w:.*vaddwod\\.d\\.w.*lsx_vaddwod_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwod_w_h:.*vaddwod\\.w\\.h.*lsx_vaddwod_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwod_h_b:.*vaddwod\\.h\\.b.*lsx_vaddwod_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwev_d_wu:.*vaddwev\\.d\\.wu.*lsx_vaddwev_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwev_w_hu:.*vaddwev\\.w\\.hu.*lsx_vaddwev_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwev_h_bu:.*vaddwev\\.h\\.bu.*lsx_vaddwev_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwod_d_wu:.*vaddwod\\.d\\.wu.*lsx_vaddwod_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwod_w_hu:.*vaddwod\\.w\\.hu.*lsx_vaddwod_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwod_h_bu:.*vaddwod\\.h\\.bu.*lsx_vaddwod_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwev_d_wu_w:.*vaddwev\\.d\\.wu\\.w.*lsx_vaddwev_d_wu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwev_w_hu_h:.*vaddwev\\.w\\.hu\\.h.*lsx_vaddwev_w_hu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwev_h_bu_b:.*vaddwev\\.h\\.bu\\.b.*lsx_vaddwev_h_bu_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwod_d_wu_w:.*vaddwod\\.d\\.wu\\.w.*lsx_vaddwod_d_wu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwod_w_hu_h:.*vaddwod\\.w\\.hu\\.h.*lsx_vaddwod_w_hu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwod_h_bu_b:.*vaddwod\\.h\\.bu\\.b.*lsx_vaddwod_h_bu_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwev_d_w:.*vsubwev\\.d\\.w.*lsx_vsubwev_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwev_w_h:.*vsubwev\\.w\\.h.*lsx_vsubwev_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwev_h_b:.*vsubwev\\.h\\.b.*lsx_vsubwev_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwod_d_w:.*vsubwod\\.d\\.w.*lsx_vsubwod_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwod_w_h:.*vsubwod\\.w\\.h.*lsx_vsubwod_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwod_h_b:.*vsubwod\\.h\\.b.*lsx_vsubwod_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwev_d_wu:.*vsubwev\\.d\\.wu.*lsx_vsubwev_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwev_w_hu:.*vsubwev\\.w\\.hu.*lsx_vsubwev_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwev_h_bu:.*vsubwev\\.h\\.bu.*lsx_vsubwev_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwod_d_wu:.*vsubwod\\.d\\.wu.*lsx_vsubwod_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwod_w_hu:.*vsubwod\\.w\\.hu.*lsx_vsubwod_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwod_h_bu:.*vsubwod\\.h\\.bu.*lsx_vsubwod_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwev_q_d:.*vaddwev\\.q\\.d.*lsx_vaddwev_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwod_q_d:.*vaddwod\\.q\\.d.*lsx_vaddwod_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwev_q_du:.*vaddwev\\.q\\.du.*lsx_vaddwev_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwod_q_du:.*vaddwod\\.q\\.du.*lsx_vaddwod_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwev_q_d:.*vsubwev\\.q\\.d.*lsx_vsubwev_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwod_q_d:.*vsubwod\\.q\\.d.*lsx_vsubwod_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwev_q_du:.*vsubwev\\.q\\.du.*lsx_vsubwev_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsubwod_q_du:.*vsubwod\\.q\\.du.*lsx_vsubwod_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwev_q_du_d:.*vaddwev\\.q\\.du\\.d.*lsx_vaddwev_q_du_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vaddwod_q_du_d:.*vaddwod\\.q\\.du\\.d.*lsx_vaddwod_q_du_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwev_d_w:.*vmulwev\\.d\\.w.*lsx_vmulwev_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwev_w_h:.*vmulwev\\.w\\.h.*lsx_vmulwev_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwev_h_b:.*vmulwev\\.h\\.b.*lsx_vmulwev_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwod_d_w:.*vmulwod\\.d\\.w.*lsx_vmulwod_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwod_w_h:.*vmulwod\\.w\\.h.*lsx_vmulwod_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwod_h_b:.*vmulwod\\.h\\.b.*lsx_vmulwod_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwev_d_wu:.*vmulwev\\.d\\.wu.*lsx_vmulwev_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwev_w_hu:.*vmulwev\\.w\\.hu.*lsx_vmulwev_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwev_h_bu:.*vmulwev\\.h\\.bu.*lsx_vmulwev_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwod_d_wu:.*vmulwod\\.d\\.wu.*lsx_vmulwod_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwod_w_hu:.*vmulwod\\.w\\.hu.*lsx_vmulwod_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwod_h_bu:.*vmulwod\\.h\\.bu.*lsx_vmulwod_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwev_d_wu_w:.*vmulwev\\.d\\.wu\\.w.*lsx_vmulwev_d_wu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwev_w_hu_h:.*vmulwev\\.w\\.hu\\.h.*lsx_vmulwev_w_hu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwev_h_bu_b:.*vmulwev\\.h\\.bu\\.b.*lsx_vmulwev_h_bu_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwod_d_wu_w:.*vmulwod\\.d\\.wu\\.w.*lsx_vmulwod_d_wu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwod_w_hu_h:.*vmulwod\\.w\\.hu\\.h.*lsx_vmulwod_w_hu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwod_h_bu_b:.*vmulwod\\.h\\.bu\\.b.*lsx_vmulwod_h_bu_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwev_q_d:.*vmulwev\\.q\\.d.*lsx_vmulwev_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwod_q_d:.*vmulwod\\.q\\.d.*lsx_vmulwod_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwev_q_du:.*vmulwev\\.q\\.du.*lsx_vmulwev_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwod_q_du:.*vmulwod\\.q\\.du.*lsx_vmulwod_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwev_q_du_d:.*vmulwev\\.q\\.du\\.d.*lsx_vmulwev_q_du_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmulwod_q_du_d:.*vmulwod\\.q\\.du\\.d.*lsx_vmulwod_q_du_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhaddw_q_d:.*vhaddw\\.q\\.d.*lsx_vhaddw_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhaddw_qu_du:.*vhaddw\\.qu\\.du.*lsx_vhaddw_qu_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhsubw_q_d:.*vhsubw\\.q\\.d.*lsx_vhsubw_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vhsubw_qu_du:.*vhsubw\\.qu\\.du.*lsx_vhsubw_qu_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwev_d_w:.*vmaddwev\\.d\\.w.*lsx_vmaddwev_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwev_w_h:.*vmaddwev\\.w\\.h.*lsx_vmaddwev_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwev_h_b:.*vmaddwev\\.h\\.b.*lsx_vmaddwev_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwev_d_wu:.*vmaddwev\\.d\\.wu.*lsx_vmaddwev_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwev_w_hu:.*vmaddwev\\.w\\.hu.*lsx_vmaddwev_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwev_h_bu:.*vmaddwev\\.h\\.bu.*lsx_vmaddwev_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwod_d_w:.*vmaddwod\\.d\\.w.*lsx_vmaddwod_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwod_w_h:.*vmaddwod\\.w\\.h.*lsx_vmaddwod_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwod_h_b:.*vmaddwod\\.h\\.b.*lsx_vmaddwod_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwod_d_wu:.*vmaddwod\\.d\\.wu.*lsx_vmaddwod_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwod_w_hu:.*vmaddwod\\.w\\.hu.*lsx_vmaddwod_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwod_h_bu:.*vmaddwod\\.h\\.bu.*lsx_vmaddwod_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwev_d_wu_w:.*vmaddwev\\.d\\.wu\\.w.*lsx_vmaddwev_d_wu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwev_w_hu_h:.*vmaddwev\\.w\\.hu\\.h.*lsx_vmaddwev_w_hu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwev_h_bu_b:.*vmaddwev\\.h\\.bu\\.b.*lsx_vmaddwev_h_bu_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwod_d_wu_w:.*vmaddwod\\.d\\.wu\\.w.*lsx_vmaddwod_d_wu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwod_w_hu_h:.*vmaddwod\\.w\\.hu\\.h.*lsx_vmaddwod_w_hu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwod_h_bu_b:.*vmaddwod\\.h\\.bu\\.b.*lsx_vmaddwod_h_bu_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwev_q_d:.*vmaddwev\\.q\\.d.*lsx_vmaddwev_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwod_q_d:.*vmaddwod\\.q\\.d.*lsx_vmaddwod_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwev_q_du:.*vmaddwev\\.q\\.du.*lsx_vmaddwev_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwod_q_du:.*vmaddwod\\.q\\.du.*lsx_vmaddwod_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwev_q_du_d:.*vmaddwev\\.q\\.du\\.d.*lsx_vmaddwev_q_du_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmaddwod_q_du_d:.*vmaddwod\\.q\\.du\\.d.*lsx_vmaddwod_q_du_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vrotr_b:.*vrotr\\.b.*lsx_vrotr_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vrotr_h:.*vrotr\\.h.*lsx_vrotr_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vrotr_w:.*vrotr\\.w.*lsx_vrotr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vrotr_d:.*vrotr\\.d.*lsx_vrotr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vadd_q:.*vadd\\.q.*lsx_vadd_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsub_q:.*vsub\\.q.*lsx_vsub_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vldrepl_b:.*vldrepl\\.b.*lsx_vldrepl_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vldrepl_h:.*vldrepl\\.h.*lsx_vldrepl_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vldrepl_w:.*vldrepl\\.w.*lsx_vldrepl_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vldrepl_d:.*vldrepl\\.d.*lsx_vldrepl_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmskgez_b:.*vmskgez\\.b.*lsx_vmskgez_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vmsknz_b:.*vmsknz\\.b.*lsx_vmsknz_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vexth_h_b:.*vexth\\.h\\.b.*lsx_vexth_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vexth_w_h:.*vexth\\.w\\.h.*lsx_vexth_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vexth_d_w:.*vexth\\.d\\.w.*lsx_vexth_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vexth_q_d:.*vexth\\.q\\.d.*lsx_vexth_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vexth_hu_bu:.*vexth\\.hu\\.bu.*lsx_vexth_hu_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vexth_wu_hu:.*vexth\\.wu\\.hu.*lsx_vexth_wu_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vexth_du_wu:.*vexth\\.du\\.wu.*lsx_vexth_du_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vexth_qu_du:.*vexth\\.qu\\.du.*lsx_vexth_qu_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vrotri_b:.*vrotri\\.b.*lsx_vrotri_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vrotri_h:.*vrotri\\.h.*lsx_vrotri_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vrotri_w:.*vrotri\\.w.*lsx_vrotri_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vrotri_d:.*vrotri\\.d.*lsx_vrotri_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vextl_q_d:.*vextl\\.q\\.d.*lsx_vextl_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlni_b_h:.*vsrlni\\.b\\.h.*lsx_vsrlni_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlni_h_w:.*vsrlni\\.h\\.w.*lsx_vsrlni_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlni_w_d:.*vsrlni\\.w\\.d.*lsx_vsrlni_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlni_d_q:.*vsrlni\\.d\\.q.*lsx_vsrlni_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlrni_b_h:.*vsrlrni\\.b\\.h.*lsx_vsrlrni_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlrni_h_w:.*vsrlrni\\.h\\.w.*lsx_vsrlrni_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlrni_w_d:.*vsrlrni\\.w\\.d.*lsx_vsrlrni_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrlrni_d_q:.*vsrlrni\\.d\\.q.*lsx_vsrlrni_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlni_b_h:.*vssrlni\\.b\\.h.*lsx_vssrlni_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlni_h_w:.*vssrlni\\.h\\.w.*lsx_vssrlni_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlni_w_d:.*vssrlni\\.w\\.d.*lsx_vssrlni_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlni_d_q:.*vssrlni\\.d\\.q.*lsx_vssrlni_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlni_bu_h:.*vssrlni\\.bu\\.h.*lsx_vssrlni_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlni_hu_w:.*vssrlni\\.hu\\.w.*lsx_vssrlni_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlni_wu_d:.*vssrlni\\.wu\\.d.*lsx_vssrlni_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlni_du_q:.*vssrlni\\.du\\.q.*lsx_vssrlni_du_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrni_b_h:.*vssrlrni\\.b\\.h.*lsx_vssrlrni_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrni_h_w:.*vssrlrni\\.h\\.w.*lsx_vssrlrni_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrni_w_d:.*vssrlrni\\.w\\.d.*lsx_vssrlrni_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrni_d_q:.*vssrlrni\\.d\\.q.*lsx_vssrlrni_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrni_bu_h:.*vssrlrni\\.bu\\.h.*lsx_vssrlrni_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrni_hu_w:.*vssrlrni\\.hu\\.w.*lsx_vssrlrni_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrni_wu_d:.*vssrlrni\\.wu\\.d.*lsx_vssrlrni_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrni_du_q:.*vssrlrni\\.du\\.q.*lsx_vssrlrni_du_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrani_b_h:.*vsrani\\.b\\.h.*lsx_vsrani_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrani_h_w:.*vsrani\\.h\\.w.*lsx_vsrani_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrani_w_d:.*vsrani\\.w\\.d.*lsx_vsrani_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrani_d_q:.*vsrani\\.d\\.q.*lsx_vsrani_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrarni_b_h:.*vsrarni\\.b\\.h.*lsx_vsrarni_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrarni_h_w:.*vsrarni\\.h\\.w.*lsx_vsrarni_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrarni_w_d:.*vsrarni\\.w\\.d.*lsx_vsrarni_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vsrarni_d_q:.*vsrarni\\.d\\.q.*lsx_vsrarni_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrani_b_h:.*vssrani\\.b\\.h.*lsx_vssrani_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrani_h_w:.*vssrani\\.h\\.w.*lsx_vssrani_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrani_w_d:.*vssrani\\.w\\.d.*lsx_vssrani_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrani_d_q:.*vssrani\\.d\\.q.*lsx_vssrani_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrani_bu_h:.*vssrani\\.bu\\.h.*lsx_vssrani_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrani_hu_w:.*vssrani\\.hu\\.w.*lsx_vssrani_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrani_wu_d:.*vssrani\\.wu\\.d.*lsx_vssrani_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrani_du_q:.*vssrani\\.du\\.q.*lsx_vssrani_du_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarni_b_h:.*vssrarni\\.b\\.h.*lsx_vssrarni_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarni_h_w:.*vssrarni\\.h\\.w.*lsx_vssrarni_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarni_w_d:.*vssrarni\\.w\\.d.*lsx_vssrarni_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarni_d_q:.*vssrarni\\.d\\.q.*lsx_vssrarni_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarni_bu_h:.*vssrarni\\.bu\\.h.*lsx_vssrarni_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarni_hu_w:.*vssrarni\\.hu\\.w.*lsx_vssrarni_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarni_wu_d:.*vssrarni\\.wu\\.d.*lsx_vssrarni_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrarni_du_q:.*vssrarni\\.du\\.q.*lsx_vssrarni_du_q" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vpermi_w:.*vpermi\\.w.*lsx_vpermi_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vld:.*vld.*lsx_vld" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vst:.*vst.*lsx_vst" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrn_b_h:.*vssrlrn\\.b\\.h.*lsx_vssrlrn_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrn_h_w:.*vssrlrn\\.h\\.w.*lsx_vssrlrn_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrlrn_w_d:.*vssrlrn\\.w\\.d.*lsx_vssrlrn_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrln_b_h:.*vssrln\\.b\\.h.*lsx_vssrln_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrln_h_w:.*vssrln\\.h\\.w.*lsx_vssrln_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vssrln_w_d:.*vssrln\\.w\\.d.*lsx_vssrln_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vorn_v:.*vorn\\.v.*lsx_vorn_v" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vldi:.*vldi.*lsx_vldi" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vshuf_b:.*vshuf\\.b.*lsx_vshuf_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vldx:.*vldx.*lsx_vldx" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vstx:.*vstx.*lsx_vstx" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vextl_qu_du:.*vextl\\.qu\\.du.*lsx_vextl_qu_du" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_bnz_b:.*vsetanyeqz\\.b.*lsx_bnz_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_bnz_d:.*vsetanyeqz\\.d.*lsx_bnz_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_bnz_h:.*vsetanyeqz\\.h.*lsx_bnz_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_bnz_v:.*vseteqz\\.v.*lsx_bnz_v" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_bnz_w:.*vsetanyeqz\\.w.*lsx_bnz_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_bz_b:.*vsetallnez\\.b.*lsx_bz_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_bz_d:.*vsetallnez\\.d.*lsx_bz_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_bz_h:.*vsetallnez\\.h.*lsx_bz_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_bz_v:.*vsetnez\\.v.*lsx_bz_v" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_bz_w:.*vsetallnez\\.w.*lsx_bz_w" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_caf_d:.*vfcmp\\.caf\\.d.*lsx_vfcmp_caf_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_caf_s:.*vfcmp\\.caf\\.s.*lsx_vfcmp_caf_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_ceq_d:.*vfcmp\\.ceq\\.d.*lsx_vfcmp_ceq_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_ceq_s:.*vfcmp\\.ceq\\.s.*lsx_vfcmp_ceq_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cle_d:.*vfcmp\\.cle\\.d.*lsx_vfcmp_cle_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cle_s:.*vfcmp\\.cle\\.s.*lsx_vfcmp_cle_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_clt_d:.*vfcmp\\.clt\\.d.*lsx_vfcmp_clt_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_clt_s:.*vfcmp\\.clt\\.s.*lsx_vfcmp_clt_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cne_d:.*vfcmp\\.cne\\.d.*lsx_vfcmp_cne_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cne_s:.*vfcmp\\.cne\\.s.*lsx_vfcmp_cne_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cor_d:.*vfcmp\\.cor\\.d.*lsx_vfcmp_cor_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cor_s:.*vfcmp\\.cor\\.s.*lsx_vfcmp_cor_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cueq_d:.*vfcmp\\.cueq\\.d.*lsx_vfcmp_cueq_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cueq_s:.*vfcmp\\.cueq\\.s.*lsx_vfcmp_cueq_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cule_d:.*vfcmp\\.cule\\.d.*lsx_vfcmp_cule_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cule_s:.*vfcmp\\.cule\\.s.*lsx_vfcmp_cule_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cult_d:.*vfcmp\\.cult\\.d.*lsx_vfcmp_cult_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cult_s:.*vfcmp\\.cult\\.s.*lsx_vfcmp_cult_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cun_d:.*vfcmp\\.cun\\.d.*lsx_vfcmp_cun_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cune_d:.*vfcmp\\.cune\\.d.*lsx_vfcmp_cune_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cune_s:.*vfcmp\\.cune\\.s.*lsx_vfcmp_cune_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_cun_s:.*vfcmp\\.cun\\.s.*lsx_vfcmp_cun_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_saf_d:.*vfcmp\\.saf\\.d.*lsx_vfcmp_saf_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_saf_s:.*vfcmp\\.saf\\.s.*lsx_vfcmp_saf_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_seq_d:.*vfcmp\\.seq\\.d.*lsx_vfcmp_seq_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_seq_s:.*vfcmp\\.seq\\.s.*lsx_vfcmp_seq_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sle_d:.*vfcmp\\.sle\\.d.*lsx_vfcmp_sle_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sle_s:.*vfcmp\\.sle\\.s.*lsx_vfcmp_sle_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_slt_d:.*vfcmp\\.slt\\.d.*lsx_vfcmp_slt_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_slt_s:.*vfcmp\\.slt\\.s.*lsx_vfcmp_slt_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sne_d:.*vfcmp\\.sne\\.d.*lsx_vfcmp_sne_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sne_s:.*vfcmp\\.sne\\.s.*lsx_vfcmp_sne_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sor_d:.*vfcmp\\.sor\\.d.*lsx_vfcmp_sor_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sor_s:.*vfcmp\\.sor\\.s.*lsx_vfcmp_sor_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sueq_d:.*vfcmp\\.sueq\\.d.*lsx_vfcmp_sueq_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sueq_s:.*vfcmp\\.sueq\\.s.*lsx_vfcmp_sueq_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sule_d:.*vfcmp\\.sule\\.d.*lsx_vfcmp_sule_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sule_s:.*vfcmp\\.sule\\.s.*lsx_vfcmp_sule_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sult_d:.*vfcmp\\.sult\\.d.*lsx_vfcmp_sult_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sult_s:.*vfcmp\\.sult\\.s.*lsx_vfcmp_sult_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sun_d:.*vfcmp\\.sun\\.d.*lsx_vfcmp_sun_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sune_d:.*vfcmp\\.sune\\.d.*lsx_vfcmp_sune_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sune_s:.*vfcmp\\.sune\\.s.*lsx_vfcmp_sune_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vfcmp_sun_s:.*vfcmp\\.sun\\.s.*lsx_vfcmp_sun_s" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vrepli_b:.*vrepli\\.b.*lsx_vrepli_b" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vrepli_d:.*vrepli\\.d.*lsx_vrepli_d" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vrepli_h:.*vrepli\\.h.*lsx_vrepli_h" 1 } } */
+/* { dg-final { scan-assembler-times "lsx_vrepli_w:.*vrepli\\.w.*lsx_vrepli_w" 1 } } */
+
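+/* 128-bit vector typedefs used by the wrappers below.  The plain names are
+   16-byte aligned; the suffixed variants (_b/_h/_w/_d) only carry the
+   natural alignment of their element type.  */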
+typedef signed char v16i8 __attribute__ ((vector_size(16), aligned(16)));
+typedef signed char v16i8_b __attribute__ ((vector_size(16), aligned(1)));
+typedef unsigned char v16u8 __attribute__ ((vector_size(16), aligned(16)));
+typedef unsigned char v16u8_b __attribute__ ((vector_size(16), aligned(1)));
+typedef short v8i16 __attribute__ ((vector_size(16), aligned(16)));
+typedef short v8i16_h __attribute__ ((vector_size(16), aligned(2)));
+typedef unsigned short v8u16 __attribute__ ((vector_size(16), aligned(16)));
+typedef unsigned short v8u16_h __attribute__ ((vector_size(16), aligned(2)));
+typedef int v4i32 __attribute__ ((vector_size(16), aligned(16)));
+typedef int v4i32_w __attribute__ ((vector_size(16), aligned(4)));
+typedef unsigned int v4u32 __attribute__ ((vector_size(16), aligned(16)));
+typedef unsigned int v4u32_w __attribute__ ((vector_size(16), aligned(4)));
+typedef long long v2i64 __attribute__ ((vector_size(16), aligned(16)));
+typedef long long v2i64_d __attribute__ ((vector_size(16), aligned(8)));
+typedef unsigned long long v2u64 __attribute__ ((vector_size(16), aligned(16)));
+typedef unsigned long long v2u64_d __attribute__ ((vector_size(16), aligned(8)));
+typedef float v4f32 __attribute__ ((vector_size(16), aligned(16)));
+typedef float v4f32_w __attribute__ ((vector_size(16), aligned(4)));
+typedef double v2f64 __attribute__ ((vector_size(16), aligned(16)));
+typedef double v2f64_d __attribute__ ((vector_size(16), aligned(8)));
+
+typedef long long __m128i __attribute__ ((__vector_size__ (16), __may_alias__));
+typedef float __m128 __attribute__ ((__vector_size__ (16), __may_alias__));
+typedef double __m128d __attribute__ ((__vector_size__ (16), __may_alias__));
+
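+/* One wrapper function per LSX builtin.  Each scan-assembler-times pattern
+   above anchors the expected vector mnemonic between two occurrences of the
+   wrapper's name, so the instruction must be emitted for that wrapper.  */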
+v16i8 __lsx_vsll_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vsll_b(_1, _2);}
+v8i16 __lsx_vsll_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsll_h(_1, _2);}
+v4i32 __lsx_vsll_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsll_w(_1, _2);}
+v2i64 __lsx_vsll_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsll_d(_1, _2);}
+v16i8 __lsx_vslli_b(v16i8 _1){return __builtin_lsx_vslli_b(_1, 1);}
+v8i16 __lsx_vslli_h(v8i16 _1){return __builtin_lsx_vslli_h(_1, 1);}
+v4i32 __lsx_vslli_w(v4i32 _1){return __builtin_lsx_vslli_w(_1, 1);}
+v2i64 __lsx_vslli_d(v2i64 _1){return __builtin_lsx_vslli_d(_1, 1);}
+v16i8 __lsx_vsra_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vsra_b(_1, _2);}
+v8i16 __lsx_vsra_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsra_h(_1, _2);}
+v4i32 __lsx_vsra_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsra_w(_1, _2);}
+v2i64 __lsx_vsra_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsra_d(_1, _2);}
+v16i8 __lsx_vsrai_b(v16i8 _1){return __builtin_lsx_vsrai_b(_1, 1);}
+v8i16 __lsx_vsrai_h(v8i16 _1){return __builtin_lsx_vsrai_h(_1, 1);}
+v4i32 __lsx_vsrai_w(v4i32 _1){return __builtin_lsx_vsrai_w(_1, 1);}
+v2i64 __lsx_vsrai_d(v2i64 _1){return __builtin_lsx_vsrai_d(_1, 1);}
+v16i8 __lsx_vsrar_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vsrar_b(_1, _2);}
+v8i16 __lsx_vsrar_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsrar_h(_1, _2);}
+v4i32 __lsx_vsrar_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsrar_w(_1, _2);}
+v2i64 __lsx_vsrar_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsrar_d(_1, _2);}
+v16i8 __lsx_vsrari_b(v16i8 _1){return __builtin_lsx_vsrari_b(_1, 1);}
+v8i16 __lsx_vsrari_h(v8i16 _1){return __builtin_lsx_vsrari_h(_1, 1);}
+v4i32 __lsx_vsrari_w(v4i32 _1){return __builtin_lsx_vsrari_w(_1, 1);}
+v2i64 __lsx_vsrari_d(v2i64 _1){return __builtin_lsx_vsrari_d(_1, 1);}
+v16i8 __lsx_vsrl_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vsrl_b(_1, _2);}
+v8i16 __lsx_vsrl_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsrl_h(_1, _2);}
+v4i32 __lsx_vsrl_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsrl_w(_1, _2);}
+v2i64 __lsx_vsrl_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsrl_d(_1, _2);}
+v16i8 __lsx_vsrli_b(v16i8 _1){return __builtin_lsx_vsrli_b(_1, 1);}
+v8i16 __lsx_vsrli_h(v8i16 _1){return __builtin_lsx_vsrli_h(_1, 1);}
+v4i32 __lsx_vsrli_w(v4i32 _1){return __builtin_lsx_vsrli_w(_1, 1);}
+v2i64 __lsx_vsrli_d(v2i64 _1){return __builtin_lsx_vsrli_d(_1, 1);}
+v16i8 __lsx_vsrlr_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vsrlr_b(_1, _2);}
+v8i16 __lsx_vsrlr_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsrlr_h(_1, _2);}
+v4i32 __lsx_vsrlr_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsrlr_w(_1, _2);}
+v2i64 __lsx_vsrlr_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsrlr_d(_1, _2);}
+v16i8 __lsx_vsrlri_b(v16i8 _1){return __builtin_lsx_vsrlri_b(_1, 1);}
+v8i16 __lsx_vsrlri_h(v8i16 _1){return __builtin_lsx_vsrlri_h(_1, 1);}
+v4i32 __lsx_vsrlri_w(v4i32 _1){return __builtin_lsx_vsrlri_w(_1, 1);}
+v2i64 __lsx_vsrlri_d(v2i64 _1){return __builtin_lsx_vsrlri_d(_1, 1);}
+v16u8 __lsx_vbitclr_b(v16u8 _1, v16u8 _2){return __builtin_lsx_vbitclr_b(_1, _2);}
+v8u16 __lsx_vbitclr_h(v8u16 _1, v8u16 _2){return __builtin_lsx_vbitclr_h(_1, _2);}
+v4u32 __lsx_vbitclr_w(v4u32 _1, v4u32 _2){return __builtin_lsx_vbitclr_w(_1, _2);}
+v2u64 __lsx_vbitclr_d(v2u64 _1, v2u64 _2){return __builtin_lsx_vbitclr_d(_1, _2);}
+v16u8 __lsx_vbitclri_b(v16u8 _1){return __builtin_lsx_vbitclri_b(_1, 1);}
+v8u16 __lsx_vbitclri_h(v8u16 _1){return __builtin_lsx_vbitclri_h(_1, 1);}
+v4u32 __lsx_vbitclri_w(v4u32 _1){return __builtin_lsx_vbitclri_w(_1, 1);}
+v2u64 __lsx_vbitclri_d(v2u64 _1){return __builtin_lsx_vbitclri_d(_1, 1);}
+v16u8 __lsx_vbitset_b(v16u8 _1, v16u8 _2){return __builtin_lsx_vbitset_b(_1, _2);}
+v8u16 __lsx_vbitset_h(v8u16 _1, v8u16 _2){return __builtin_lsx_vbitset_h(_1, _2);}
+v4u32 __lsx_vbitset_w(v4u32 _1, v4u32 _2){return __builtin_lsx_vbitset_w(_1, _2);}
+v2u64 __lsx_vbitset_d(v2u64 _1, v2u64 _2){return __builtin_lsx_vbitset_d(_1, _2);}
+v16u8 __lsx_vbitseti_b(v16u8 _1){return __builtin_lsx_vbitseti_b(_1, 1);}
+v8u16 __lsx_vbitseti_h(v8u16 _1){return __builtin_lsx_vbitseti_h(_1, 1);}
+v4u32 __lsx_vbitseti_w(v4u32 _1){return __builtin_lsx_vbitseti_w(_1, 1);}
+v2u64 __lsx_vbitseti_d(v2u64 _1){return __builtin_lsx_vbitseti_d(_1, 1);}
+v16u8 __lsx_vbitrev_b(v16u8 _1, v16u8 _2){return __builtin_lsx_vbitrev_b(_1, _2);}
+v8u16 __lsx_vbitrev_h(v8u16 _1, v8u16 _2){return __builtin_lsx_vbitrev_h(_1, _2);}
+v4u32 __lsx_vbitrev_w(v4u32 _1, v4u32 _2){return __builtin_lsx_vbitrev_w(_1, _2);}
+v2u64 __lsx_vbitrev_d(v2u64 _1, v2u64 _2){return __builtin_lsx_vbitrev_d(_1, _2);}
+v16u8 __lsx_vbitrevi_b(v16u8 _1){return __builtin_lsx_vbitrevi_b(_1, 1);}
+v8u16 __lsx_vbitrevi_h(v8u16 _1){return __builtin_lsx_vbitrevi_h(_1, 1);}
+v4u32 __lsx_vbitrevi_w(v4u32 _1){return __builtin_lsx_vbitrevi_w(_1, 1);}
+v2u64 __lsx_vbitrevi_d(v2u64 _1){return __builtin_lsx_vbitrevi_d(_1, 1);}
+v16i8 __lsx_vadd_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vadd_b(_1, _2);}
+v8i16 __lsx_vadd_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vadd_h(_1, _2);}
+v4i32 __lsx_vadd_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vadd_w(_1, _2);}
+v2i64 __lsx_vadd_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vadd_d(_1, _2);}
+v16i8 __lsx_vaddi_bu(v16i8 _1){return __builtin_lsx_vaddi_bu(_1, 1);}
+v8i16 __lsx_vaddi_hu(v8i16 _1){return __builtin_lsx_vaddi_hu(_1, 1);}
+v4i32 __lsx_vaddi_wu(v4i32 _1){return __builtin_lsx_vaddi_wu(_1, 1);}
+v2i64 __lsx_vaddi_du(v2i64 _1){return __builtin_lsx_vaddi_du(_1, 1);}
+v16i8 __lsx_vsub_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vsub_b(_1, _2);}
+v8i16 __lsx_vsub_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsub_h(_1, _2);}
+v4i32 __lsx_vsub_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsub_w(_1, _2);}
+v2i64 __lsx_vsub_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsub_d(_1, _2);}
+v16i8 __lsx_vsubi_bu(v16i8 _1){return __builtin_lsx_vsubi_bu(_1, 1);}
+v8i16 __lsx_vsubi_hu(v8i16 _1){return __builtin_lsx_vsubi_hu(_1, 1);}
+v4i32 __lsx_vsubi_wu(v4i32 _1){return __builtin_lsx_vsubi_wu(_1, 1);}
+v2i64 __lsx_vsubi_du(v2i64 _1){return __builtin_lsx_vsubi_du(_1, 1);}
+v16i8 __lsx_vmax_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vmax_b(_1, _2);}
+v8i16 __lsx_vmax_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vmax_h(_1, _2);}
+v4i32 __lsx_vmax_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vmax_w(_1, _2);}
+v2i64 __lsx_vmax_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vmax_d(_1, _2);}
+v16i8 __lsx_vmaxi_b(v16i8 _1){return __builtin_lsx_vmaxi_b(_1, 1);}
+v8i16 __lsx_vmaxi_h(v8i16 _1){return __builtin_lsx_vmaxi_h(_1, 1);}
+v4i32 __lsx_vmaxi_w(v4i32 _1){return __builtin_lsx_vmaxi_w(_1, 1);}
+v2i64 __lsx_vmaxi_d(v2i64 _1){return __builtin_lsx_vmaxi_d(_1, 1);}
+v16u8 __lsx_vmax_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vmax_bu(_1, _2);}
+v8u16 __lsx_vmax_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vmax_hu(_1, _2);}
+v4u32 __lsx_vmax_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vmax_wu(_1, _2);}
+v2u64 __lsx_vmax_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vmax_du(_1, _2);}
+v16u8 __lsx_vmaxi_bu(v16u8 _1){return __builtin_lsx_vmaxi_bu(_1, 1);}
+v8u16 __lsx_vmaxi_hu(v8u16 _1){return __builtin_lsx_vmaxi_hu(_1, 1);}
+v4u32 __lsx_vmaxi_wu(v4u32 _1){return __builtin_lsx_vmaxi_wu(_1, 1);}
+v2u64 __lsx_vmaxi_du(v2u64 _1){return __builtin_lsx_vmaxi_du(_1, 1);}
+v16i8 __lsx_vmin_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vmin_b(_1, _2);}
+v8i16 __lsx_vmin_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vmin_h(_1, _2);}
+v4i32 __lsx_vmin_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vmin_w(_1, _2);}
+v2i64 __lsx_vmin_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vmin_d(_1, _2);}
+v16i8 __lsx_vmini_b(v16i8 _1){return __builtin_lsx_vmini_b(_1, 1);}
+v8i16 __lsx_vmini_h(v8i16 _1){return __builtin_lsx_vmini_h(_1, 1);}
+v4i32 __lsx_vmini_w(v4i32 _1){return __builtin_lsx_vmini_w(_1, 1);}
+v2i64 __lsx_vmini_d(v2i64 _1){return __builtin_lsx_vmini_d(_1, 1);}
+v16u8 __lsx_vmin_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vmin_bu(_1, _2);}
+v8u16 __lsx_vmin_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vmin_hu(_1, _2);}
+v4u32 __lsx_vmin_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vmin_wu(_1, _2);}
+v2u64 __lsx_vmin_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vmin_du(_1, _2);}
+v16u8 __lsx_vmini_bu(v16u8 _1){return __builtin_lsx_vmini_bu(_1, 1);}
+v8u16 __lsx_vmini_hu(v8u16 _1){return __builtin_lsx_vmini_hu(_1, 1);}
+v4u32 __lsx_vmini_wu(v4u32 _1){return __builtin_lsx_vmini_wu(_1, 1);}
+v2u64 __lsx_vmini_du(v2u64 _1){return __builtin_lsx_vmini_du(_1, 1);}
+v16i8 __lsx_vseq_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vseq_b(_1, _2);}
+v8i16 __lsx_vseq_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vseq_h(_1, _2);}
+v4i32 __lsx_vseq_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vseq_w(_1, _2);}
+v2i64 __lsx_vseq_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vseq_d(_1, _2);}
+v16i8 __lsx_vseqi_b(v16i8 _1){return __builtin_lsx_vseqi_b(_1, 1);}
+v8i16 __lsx_vseqi_h(v8i16 _1){return __builtin_lsx_vseqi_h(_1, 1);}
+v4i32 __lsx_vseqi_w(v4i32 _1){return __builtin_lsx_vseqi_w(_1, 1);}
+v2i64 __lsx_vseqi_d(v2i64 _1){return __builtin_lsx_vseqi_d(_1, 1);}
+v16i8 __lsx_vslti_b(v16i8 _1){return __builtin_lsx_vslti_b(_1, 1);}
+v16i8 __lsx_vslt_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vslt_b(_1, _2);}
+v8i16 __lsx_vslt_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vslt_h(_1, _2);}
+v4i32 __lsx_vslt_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vslt_w(_1, _2);}
+v2i64 __lsx_vslt_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vslt_d(_1, _2);}
+v8i16 __lsx_vslti_h(v8i16 _1){return __builtin_lsx_vslti_h(_1, 1);}
+v4i32 __lsx_vslti_w(v4i32 _1){return __builtin_lsx_vslti_w(_1, 1);}
+v2i64 __lsx_vslti_d(v2i64 _1){return __builtin_lsx_vslti_d(_1, 1);}
+v16i8 __lsx_vslt_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vslt_bu(_1, _2);}
+v8i16 __lsx_vslt_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vslt_hu(_1, _2);}
+v4i32 __lsx_vslt_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vslt_wu(_1, _2);}
+v2i64 __lsx_vslt_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vslt_du(_1, _2);}
+v16i8 __lsx_vslti_bu(v16u8 _1){return __builtin_lsx_vslti_bu(_1, 1);}
+v8i16 __lsx_vslti_hu(v8u16 _1){return __builtin_lsx_vslti_hu(_1, 1);}
+v4i32 __lsx_vslti_wu(v4u32 _1){return __builtin_lsx_vslti_wu(_1, 1);}
+v2i64 __lsx_vslti_du(v2u64 _1){return __builtin_lsx_vslti_du(_1, 1);}
+v16i8 __lsx_vsle_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vsle_b(_1, _2);}
+v8i16 __lsx_vsle_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsle_h(_1, _2);}
+v4i32 __lsx_vsle_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsle_w(_1, _2);}
+v2i64 __lsx_vsle_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsle_d(_1, _2);}
+v16i8 __lsx_vslei_b(v16i8 _1){return __builtin_lsx_vslei_b(_1, 1);}
+v8i16 __lsx_vslei_h(v8i16 _1){return __builtin_lsx_vslei_h(_1, 1);}
+v4i32 __lsx_vslei_w(v4i32 _1){return __builtin_lsx_vslei_w(_1, 1);}
+v2i64 __lsx_vslei_d(v2i64 _1){return __builtin_lsx_vslei_d(_1, 1);}
+v16i8 __lsx_vsle_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vsle_bu(_1, _2);}
+v8i16 __lsx_vsle_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vsle_hu(_1, _2);}
+v4i32 __lsx_vsle_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vsle_wu(_1, _2);}
+v2i64 __lsx_vsle_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vsle_du(_1, _2);}
+v16i8 __lsx_vslei_bu(v16u8 _1){return __builtin_lsx_vslei_bu(_1, 1);}
+v8i16 __lsx_vslei_hu(v8u16 _1){return __builtin_lsx_vslei_hu(_1, 1);}
+v4i32 __lsx_vslei_wu(v4u32 _1){return __builtin_lsx_vslei_wu(_1, 1);}
+v2i64 __lsx_vslei_du(v2u64 _1){return __builtin_lsx_vslei_du(_1, 1);}
+v16i8 __lsx_vsat_b(v16i8 _1){return __builtin_lsx_vsat_b(_1, 1);}
+v8i16 __lsx_vsat_h(v8i16 _1){return __builtin_lsx_vsat_h(_1, 1);}
+v4i32 __lsx_vsat_w(v4i32 _1){return __builtin_lsx_vsat_w(_1, 1);}
+v2i64 __lsx_vsat_d(v2i64 _1){return __builtin_lsx_vsat_d(_1, 1);}
+v16u8 __lsx_vsat_bu(v16u8 _1){return __builtin_lsx_vsat_bu(_1, 1);}
+v8u16 __lsx_vsat_hu(v8u16 _1){return __builtin_lsx_vsat_hu(_1, 1);}
+v4u32 __lsx_vsat_wu(v4u32 _1){return __builtin_lsx_vsat_wu(_1, 1);}
+v2u64 __lsx_vsat_du(v2u64 _1){return __builtin_lsx_vsat_du(_1, 1);}
+v16i8 __lsx_vadda_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vadda_b(_1, _2);}
+v8i16 __lsx_vadda_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vadda_h(_1, _2);}
+v4i32 __lsx_vadda_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vadda_w(_1, _2);}
+v2i64 __lsx_vadda_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vadda_d(_1, _2);}
+v16i8 __lsx_vsadd_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vsadd_b(_1, _2);}
+v8i16 __lsx_vsadd_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsadd_h(_1, _2);}
+v4i32 __lsx_vsadd_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsadd_w(_1, _2);}
+v2i64 __lsx_vsadd_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsadd_d(_1, _2);}
+v16u8 __lsx_vsadd_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vsadd_bu(_1, _2);}
+v8u16 __lsx_vsadd_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vsadd_hu(_1, _2);}
+v4u32 __lsx_vsadd_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vsadd_wu(_1, _2);}
+v2u64 __lsx_vsadd_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vsadd_du(_1, _2);}
+v16i8 __lsx_vavg_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vavg_b(_1, _2);}
+v8i16 __lsx_vavg_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vavg_h(_1, _2);}
+v4i32 __lsx_vavg_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vavg_w(_1, _2);}
+v2i64 __lsx_vavg_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vavg_d(_1, _2);}
+v16u8 __lsx_vavg_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vavg_bu(_1, _2);}
+v8u16 __lsx_vavg_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vavg_hu(_1, _2);}
+v4u32 __lsx_vavg_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vavg_wu(_1, _2);}
+v2u64 __lsx_vavg_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vavg_du(_1, _2);}
+v16i8 __lsx_vavgr_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vavgr_b(_1, _2);}
+v8i16 __lsx_vavgr_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vavgr_h(_1, _2);}
+v4i32 __lsx_vavgr_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vavgr_w(_1, _2);}
+v2i64 __lsx_vavgr_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vavgr_d(_1, _2);}
+v16u8 __lsx_vavgr_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vavgr_bu(_1, _2);}
+v8u16 __lsx_vavgr_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vavgr_hu(_1, _2);}
+v4u32 __lsx_vavgr_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vavgr_wu(_1, _2);}
+v2u64 __lsx_vavgr_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vavgr_du(_1, _2);}
+v16i8 __lsx_vssub_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vssub_b(_1, _2);}
+v8i16 __lsx_vssub_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vssub_h(_1, _2);}
+v4i32 __lsx_vssub_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vssub_w(_1, _2);}
+v2i64 __lsx_vssub_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vssub_d(_1, _2);}
+v16u8 __lsx_vssub_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vssub_bu(_1, _2);}
+v8u16 __lsx_vssub_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vssub_hu(_1, _2);}
+v4u32 __lsx_vssub_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vssub_wu(_1, _2);}
+v2u64 __lsx_vssub_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vssub_du(_1, _2);}
+v16i8 __lsx_vabsd_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vabsd_b(_1, _2);}
+v8i16 __lsx_vabsd_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vabsd_h(_1, _2);}
+v4i32 __lsx_vabsd_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vabsd_w(_1, _2);}
+v2i64 __lsx_vabsd_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vabsd_d(_1, _2);}
+v16u8 __lsx_vabsd_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vabsd_bu(_1, _2);}
+v8u16 __lsx_vabsd_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vabsd_hu(_1, _2);}
+v4u32 __lsx_vabsd_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vabsd_wu(_1, _2);}
+v2u64 __lsx_vabsd_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vabsd_du(_1, _2);}
+v16i8 __lsx_vmul_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vmul_b(_1, _2);}
+v8i16 __lsx_vmul_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vmul_h(_1, _2);}
+v4i32 __lsx_vmul_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vmul_w(_1, _2);}
+v2i64 __lsx_vmul_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vmul_d(_1, _2);}
+v16i8 __lsx_vmadd_b(v16i8 _1, v16i8 _2, v16i8 _3){return __builtin_lsx_vmadd_b(_1, _2, _3);}
+v8i16 __lsx_vmadd_h(v8i16 _1, v8i16 _2, v8i16 _3){return __builtin_lsx_vmadd_h(_1, _2, _3);}
+v4i32 __lsx_vmadd_w(v4i32 _1, v4i32 _2, v4i32 _3){return __builtin_lsx_vmadd_w(_1, _2, _3);}
+v2i64 __lsx_vmadd_d(v2i64 _1, v2i64 _2, v2i64 _3){return __builtin_lsx_vmadd_d(_1, _2, _3);}
+v16i8 __lsx_vmsub_b(v16i8 _1, v16i8 _2, v16i8 _3){return __builtin_lsx_vmsub_b(_1, _2, _3);}
+v8i16 __lsx_vmsub_h(v8i16 _1, v8i16 _2, v8i16 _3){return __builtin_lsx_vmsub_h(_1, _2, _3);}
+v4i32 __lsx_vmsub_w(v4i32 _1, v4i32 _2, v4i32 _3){return __builtin_lsx_vmsub_w(_1, _2, _3);}
+v2i64 __lsx_vmsub_d(v2i64 _1, v2i64 _2, v2i64 _3){return __builtin_lsx_vmsub_d(_1, _2, _3);}
+v16i8 __lsx_vdiv_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vdiv_b(_1, _2);}
+v8i16 __lsx_vdiv_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vdiv_h(_1, _2);}
+v4i32 __lsx_vdiv_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vdiv_w(_1, _2);}
+v2i64 __lsx_vdiv_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vdiv_d(_1, _2);}
+v16u8 __lsx_vdiv_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vdiv_bu(_1, _2);}
+v8u16 __lsx_vdiv_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vdiv_hu(_1, _2);}
+v4u32 __lsx_vdiv_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vdiv_wu(_1, _2);}
+v2u64 __lsx_vdiv_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vdiv_du(_1, _2);}
+v8i16 __lsx_vhaddw_h_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vhaddw_h_b(_1, _2);}
+v4i32 __lsx_vhaddw_w_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vhaddw_w_h(_1, _2);}
+v2i64 __lsx_vhaddw_d_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vhaddw_d_w(_1, _2);}
+v8u16 __lsx_vhaddw_hu_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vhaddw_hu_bu(_1, _2);}
+v4u32 __lsx_vhaddw_wu_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vhaddw_wu_hu(_1, _2);}
+v2u64 __lsx_vhaddw_du_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vhaddw_du_wu(_1, _2);}
+v8i16 __lsx_vhsubw_h_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vhsubw_h_b(_1, _2);}
+v4i32 __lsx_vhsubw_w_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vhsubw_w_h(_1, _2);}
+v2i64 __lsx_vhsubw_d_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vhsubw_d_w(_1, _2);}
+v8i16 __lsx_vhsubw_hu_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vhsubw_hu_bu(_1, _2);}
+v4i32 __lsx_vhsubw_wu_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vhsubw_wu_hu(_1, _2);}
+v2i64 __lsx_vhsubw_du_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vhsubw_du_wu(_1, _2);}
+v16i8 __lsx_vmod_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vmod_b(_1, _2);}
+v8i16 __lsx_vmod_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vmod_h(_1, _2);}
+v4i32 __lsx_vmod_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vmod_w(_1, _2);}
+v2i64 __lsx_vmod_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vmod_d(_1, _2);}
+v16u8 __lsx_vmod_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vmod_bu(_1, _2);}
+v8u16 __lsx_vmod_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vmod_hu(_1, _2);}
+v4u32 __lsx_vmod_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vmod_wu(_1, _2);}
+v2u64 __lsx_vmod_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vmod_du(_1, _2);}
+v16i8 __lsx_vreplve_b(v16i8 _1, int _2){return __builtin_lsx_vreplve_b(_1, _2);}
+v8i16 __lsx_vreplve_h(v8i16 _1, int _2){return __builtin_lsx_vreplve_h(_1, _2);}
+v4i32 __lsx_vreplve_w(v4i32 _1, int _2){return __builtin_lsx_vreplve_w(_1, _2);}
+v2i64 __lsx_vreplve_d(v2i64 _1, int _2){return __builtin_lsx_vreplve_d(_1, _2);}
+v16i8 __lsx_vreplvei_b(v16i8 _1){return __builtin_lsx_vreplvei_b(_1, 1);}
+v8i16 __lsx_vreplvei_h(v8i16 _1){return __builtin_lsx_vreplvei_h(_1, 1);}
+v4i32 __lsx_vreplvei_w(v4i32 _1){return __builtin_lsx_vreplvei_w(_1, 1);}
+v2i64 __lsx_vreplvei_d(v2i64 _1){return __builtin_lsx_vreplvei_d(_1, 1);}
+v16i8 __lsx_vpickev_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vpickev_b(_1, _2);}
+v8i16 __lsx_vpickev_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vpickev_h(_1, _2);}
+v4i32 __lsx_vpickev_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vpickev_w(_1, _2);}
+v2i64 __lsx_vpickev_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vpickev_d(_1, _2);}
+v16i8 __lsx_vpickod_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vpickod_b(_1, _2);}
+v8i16 __lsx_vpickod_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vpickod_h(_1, _2);}
+v4i32 __lsx_vpickod_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vpickod_w(_1, _2);}
+v2i64 __lsx_vpickod_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vpickod_d(_1, _2);}
+v16i8 __lsx_vilvh_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vilvh_b(_1, _2);}
+v8i16 __lsx_vilvh_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vilvh_h(_1, _2);}
+v4i32 __lsx_vilvh_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vilvh_w(_1, _2);}
+v2i64 __lsx_vilvh_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vilvh_d(_1, _2);}
+v16i8 __lsx_vilvl_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vilvl_b(_1, _2);}
+v8i16 __lsx_vilvl_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vilvl_h(_1, _2);}
+v4i32 __lsx_vilvl_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vilvl_w(_1, _2);}
+v2i64 __lsx_vilvl_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vilvl_d(_1, _2);}
+v16i8 __lsx_vpackev_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vpackev_b(_1, _2);}
+v8i16 __lsx_vpackev_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vpackev_h(_1, _2);}
+v4i32 __lsx_vpackev_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vpackev_w(_1, _2);}
+v2i64 __lsx_vpackev_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vpackev_d(_1, _2);}
+v16i8 __lsx_vpackod_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vpackod_b(_1, _2);}
+v8i16 __lsx_vpackod_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vpackod_h(_1, _2);}
+v4i32 __lsx_vpackod_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vpackod_w(_1, _2);}
+v2i64 __lsx_vpackod_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vpackod_d(_1, _2);}
+v8i16 __lsx_vshuf_h(v8i16 _1, v8i16 _2, v8i16 _3){return __builtin_lsx_vshuf_h(_1, _2, _3);}
+v4i32 __lsx_vshuf_w(v4i32 _1, v4i32 _2, v4i32 _3){return __builtin_lsx_vshuf_w(_1, _2, _3);}
+v2i64 __lsx_vshuf_d(v2i64 _1, v2i64 _2, v2i64 _3){return __builtin_lsx_vshuf_d(_1, _2, _3);}
+v16u8 __lsx_vand_v(v16u8 _1, v16u8 _2){return __builtin_lsx_vand_v(_1, _2);}
+v16u8 __lsx_vandi_b(v16u8 _1){return __builtin_lsx_vandi_b(_1, 1);}
+v16u8 __lsx_vor_v(v16u8 _1, v16u8 _2){return __builtin_lsx_vor_v(_1, _2);}
+v16u8 __lsx_vori_b(v16u8 _1){return __builtin_lsx_vori_b(_1, 1);}
+v16u8 __lsx_vnor_v(v16u8 _1, v16u8 _2){return __builtin_lsx_vnor_v(_1, _2);}
+v16u8 __lsx_vnori_b(v16u8 _1){return __builtin_lsx_vnori_b(_1, 1);}
+v16u8 __lsx_vxor_v(v16u8 _1, v16u8 _2){return __builtin_lsx_vxor_v(_1, _2);}
+v16u8 __lsx_vxori_b(v16u8 _1){return __builtin_lsx_vxori_b(_1, 1);}
+v16u8 __lsx_vbitsel_v(v16u8 _1, v16u8 _2, v16u8 _3){return __builtin_lsx_vbitsel_v(_1, _2, _3);}
+v16u8 __lsx_vbitseli_b(v16u8 _1, v16u8 _2){return __builtin_lsx_vbitseli_b(_1, _2, 1);}
+v16i8 __lsx_vshuf4i_b(v16i8 _1){return __builtin_lsx_vshuf4i_b(_1, 1);}
+v8i16 __lsx_vshuf4i_h(v8i16 _1){return __builtin_lsx_vshuf4i_h(_1, 1);}
+v4i32 __lsx_vshuf4i_w(v4i32 _1){return __builtin_lsx_vshuf4i_w(_1, 1);}
+v16i8 __lsx_vreplgr2vr_b(int _1){return __builtin_lsx_vreplgr2vr_b(_1);}
+v8i16 __lsx_vreplgr2vr_h(int _1){return __builtin_lsx_vreplgr2vr_h(_1);}
+v4i32 __lsx_vreplgr2vr_w(int _1){return __builtin_lsx_vreplgr2vr_w(_1);}
+v2i64 __lsx_vreplgr2vr_d(long _1){return __builtin_lsx_vreplgr2vr_d(_1);}
+v16i8 __lsx_vpcnt_b(v16i8 _1){return __builtin_lsx_vpcnt_b(_1);}
+v8i16 __lsx_vpcnt_h(v8i16 _1){return __builtin_lsx_vpcnt_h(_1);}
+v4i32 __lsx_vpcnt_w(v4i32 _1){return __builtin_lsx_vpcnt_w(_1);}
+v2i64 __lsx_vpcnt_d(v2i64 _1){return __builtin_lsx_vpcnt_d(_1);}
+v16i8 __lsx_vclo_b(v16i8 _1){return __builtin_lsx_vclo_b(_1);}
+v8i16 __lsx_vclo_h(v8i16 _1){return __builtin_lsx_vclo_h(_1);}
+v4i32 __lsx_vclo_w(v4i32 _1){return __builtin_lsx_vclo_w(_1);}
+v2i64 __lsx_vclo_d(v2i64 _1){return __builtin_lsx_vclo_d(_1);}
+v16i8 __lsx_vclz_b(v16i8 _1){return __builtin_lsx_vclz_b(_1);}
+v8i16 __lsx_vclz_h(v8i16 _1){return __builtin_lsx_vclz_h(_1);}
+v4i32 __lsx_vclz_w(v4i32 _1){return __builtin_lsx_vclz_w(_1);}
+v2i64 __lsx_vclz_d(v2i64 _1){return __builtin_lsx_vclz_d(_1);}
+int __lsx_vpickve2gr_b(v16i8 _1){return __builtin_lsx_vpickve2gr_b(_1, 1);}
+int __lsx_vpickve2gr_h(v8i16 _1){return __builtin_lsx_vpickve2gr_h(_1, 1);}
+int __lsx_vpickve2gr_w(v4i32 _1){return __builtin_lsx_vpickve2gr_w(_1, 1);}
+long __lsx_vpickve2gr_d(v2i64 _1){return __builtin_lsx_vpickve2gr_d(_1, 1);}
+unsigned int __lsx_vpickve2gr_bu(v16i8 _1){return __builtin_lsx_vpickve2gr_bu(_1, 1);}
+unsigned int __lsx_vpickve2gr_hu(v8i16 _1){return __builtin_lsx_vpickve2gr_hu(_1, 1);}
+unsigned int __lsx_vpickve2gr_wu(v4i32 _1){return __builtin_lsx_vpickve2gr_wu(_1, 1);}
+unsigned long int __lsx_vpickve2gr_du(v2i64 _1){return __builtin_lsx_vpickve2gr_du(_1, 1);}
+v16i8 __lsx_vinsgr2vr_b(v16i8 _1){return __builtin_lsx_vinsgr2vr_b(_1, 1, 1);}
+v8i16 __lsx_vinsgr2vr_h(v8i16 _1){return __builtin_lsx_vinsgr2vr_h(_1, 1, 1);}
+v4i32 __lsx_vinsgr2vr_w(v4i32 _1){return __builtin_lsx_vinsgr2vr_w(_1, 1, 1);}
+v2i64 __lsx_vinsgr2vr_d(v2i64 _1){return __builtin_lsx_vinsgr2vr_d(_1, 1, 1);}
+v4f32 __lsx_vfadd_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfadd_s(_1, _2);}
+v2f64 __lsx_vfadd_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfadd_d(_1, _2);}
+v4f32 __lsx_vfsub_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfsub_s(_1, _2);}
+v2f64 __lsx_vfsub_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfsub_d(_1, _2);}
+v4f32 __lsx_vfmul_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfmul_s(_1, _2);}
+v2f64 __lsx_vfmul_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfmul_d(_1, _2);}
+v4f32 __lsx_vfdiv_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfdiv_s(_1, _2);}
+v2f64 __lsx_vfdiv_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfdiv_d(_1, _2);}
+v8i16 __lsx_vfcvt_h_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcvt_h_s(_1, _2);}
+v4f32 __lsx_vfcvt_s_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcvt_s_d(_1, _2);}
+v4f32 __lsx_vfmin_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfmin_s(_1, _2);}
+v2f64 __lsx_vfmin_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfmin_d(_1, _2);}
+v4f32 __lsx_vfmina_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfmina_s(_1, _2);}
+v2f64 __lsx_vfmina_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfmina_d(_1, _2);}
+v4f32 __lsx_vfmax_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfmax_s(_1, _2);}
+v2f64 __lsx_vfmax_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfmax_d(_1, _2);}
+v4f32 __lsx_vfmaxa_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfmaxa_s(_1, _2);}
+v2f64 __lsx_vfmaxa_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfmaxa_d(_1, _2);}
+v4i32 __lsx_vfclass_s(v4f32 _1){return __builtin_lsx_vfclass_s(_1);}
+v2i64 __lsx_vfclass_d(v2f64 _1){return __builtin_lsx_vfclass_d(_1);}
+v4f32 __lsx_vfsqrt_s(v4f32 _1){return __builtin_lsx_vfsqrt_s(_1);}
+v2f64 __lsx_vfsqrt_d(v2f64 _1){return __builtin_lsx_vfsqrt_d(_1);}
+v4f32 __lsx_vfrecip_s(v4f32 _1){return __builtin_lsx_vfrecip_s(_1);}
+v2f64 __lsx_vfrecip_d(v2f64 _1){return __builtin_lsx_vfrecip_d(_1);}
+v4f32 __lsx_vfrint_s(v4f32 _1){return __builtin_lsx_vfrint_s(_1);}
+v2f64 __lsx_vfrint_d(v2f64 _1){return __builtin_lsx_vfrint_d(_1);}
+v4f32 __lsx_vfrsqrt_s(v4f32 _1){return __builtin_lsx_vfrsqrt_s(_1);}
+v2f64 __lsx_vfrsqrt_d(v2f64 _1){return __builtin_lsx_vfrsqrt_d(_1);}
+v4f32 __lsx_vflogb_s(v4f32 _1){return __builtin_lsx_vflogb_s(_1);}
+v2f64 __lsx_vflogb_d(v2f64 _1){return __builtin_lsx_vflogb_d(_1);}
+v4f32 __lsx_vfcvth_s_h(v8i16 _1){return __builtin_lsx_vfcvth_s_h(_1);}
+v2f64 __lsx_vfcvth_d_s(v4f32 _1){return __builtin_lsx_vfcvth_d_s(_1);}
+v4f32 __lsx_vfcvtl_s_h(v8i16 _1){return __builtin_lsx_vfcvtl_s_h(_1);}
+v2f64 __lsx_vfcvtl_d_s(v4f32 _1){return __builtin_lsx_vfcvtl_d_s(_1);}
+v4i32 __lsx_vftint_w_s(v4f32 _1){return __builtin_lsx_vftint_w_s(_1);}
+v2i64 __lsx_vftint_l_d(v2f64 _1){return __builtin_lsx_vftint_l_d(_1);}
+v4u32 __lsx_vftint_wu_s(v4f32 _1){return __builtin_lsx_vftint_wu_s(_1);}
+v2u64 __lsx_vftint_lu_d(v2f64 _1){return __builtin_lsx_vftint_lu_d(_1);}
+v4i32 __lsx_vftintrz_w_s(v4f32 _1){return __builtin_lsx_vftintrz_w_s(_1);}
+v2i64 __lsx_vftintrz_l_d(v2f64 _1){return __builtin_lsx_vftintrz_l_d(_1);}
+v4u32 __lsx_vftintrz_wu_s(v4f32 _1){return __builtin_lsx_vftintrz_wu_s(_1);}
+v2u64 __lsx_vftintrz_lu_d(v2f64 _1){return __builtin_lsx_vftintrz_lu_d(_1);}
+v4f32 __lsx_vffint_s_w(v4i32 _1){return __builtin_lsx_vffint_s_w(_1);}
+v2f64 __lsx_vffint_d_l(v2i64 _1){return __builtin_lsx_vffint_d_l(_1);}
+v4f32 __lsx_vffint_s_wu(v4u32 _1){return __builtin_lsx_vffint_s_wu(_1);}
+v2f64 __lsx_vffint_d_lu(v2u64 _1){return __builtin_lsx_vffint_d_lu(_1);}
+v16u8 __lsx_vandn_v(v16u8 _1, v16u8 _2){return __builtin_lsx_vandn_v(_1, _2);}
+v16i8 __lsx_vneg_b(v16i8 _1){return __builtin_lsx_vneg_b(_1);}
+v8i16 __lsx_vneg_h(v8i16 _1){return __builtin_lsx_vneg_h(_1);}
+v4i32 __lsx_vneg_w(v4i32 _1){return __builtin_lsx_vneg_w(_1);}
+v2i64 __lsx_vneg_d(v2i64 _1){return __builtin_lsx_vneg_d(_1);}
+v16i8 __lsx_vmuh_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vmuh_b(_1, _2);}
+v8i16 __lsx_vmuh_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vmuh_h(_1, _2);}
+v4i32 __lsx_vmuh_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vmuh_w(_1, _2);}
+v2i64 __lsx_vmuh_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vmuh_d(_1, _2);}
+v16u8 __lsx_vmuh_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vmuh_bu(_1, _2);}
+v8u16 __lsx_vmuh_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vmuh_hu(_1, _2);}
+v4u32 __lsx_vmuh_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vmuh_wu(_1, _2);}
+v2u64 __lsx_vmuh_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vmuh_du(_1, _2);}
+v8i16 __lsx_vsllwil_h_b(v16i8 _1){return __builtin_lsx_vsllwil_h_b(_1, 1);}
+v4i32 __lsx_vsllwil_w_h(v8i16 _1){return __builtin_lsx_vsllwil_w_h(_1, 1);}
+v2i64 __lsx_vsllwil_d_w(v4i32 _1){return __builtin_lsx_vsllwil_d_w(_1, 1);}
+v8u16 __lsx_vsllwil_hu_bu(v16u8 _1){return __builtin_lsx_vsllwil_hu_bu(_1, 1);}
+v4u32 __lsx_vsllwil_wu_hu(v8u16 _1){return __builtin_lsx_vsllwil_wu_hu(_1, 1);}
+v2u64 __lsx_vsllwil_du_wu(v4u32 _1){return __builtin_lsx_vsllwil_du_wu(_1, 1);}
+v16i8 __lsx_vsran_b_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsran_b_h(_1, _2);}
+v8i16 __lsx_vsran_h_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsran_h_w(_1, _2);}
+v4i32 __lsx_vsran_w_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsran_w_d(_1, _2);}
+v16i8 __lsx_vssran_b_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vssran_b_h(_1, _2);}
+v8i16 __lsx_vssran_h_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vssran_h_w(_1, _2);}
+v4i32 __lsx_vssran_w_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vssran_w_d(_1, _2);}
+v16u8 __lsx_vssran_bu_h(v8u16 _1, v8u16 _2){return __builtin_lsx_vssran_bu_h(_1, _2);}
+v8u16 __lsx_vssran_hu_w(v4u32 _1, v4u32 _2){return __builtin_lsx_vssran_hu_w(_1, _2);}
+v4u32 __lsx_vssran_wu_d(v2u64 _1, v2u64 _2){return __builtin_lsx_vssran_wu_d(_1, _2);}
+v16i8 __lsx_vsrarn_b_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsrarn_b_h(_1, _2);}
+v8i16 __lsx_vsrarn_h_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsrarn_h_w(_1, _2);}
+v4i32 __lsx_vsrarn_w_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsrarn_w_d(_1, _2);}
+v16i8 __lsx_vssrarn_b_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vssrarn_b_h(_1, _2);}
+v8i16 __lsx_vssrarn_h_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vssrarn_h_w(_1, _2);}
+v4i32 __lsx_vssrarn_w_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vssrarn_w_d(_1, _2);}
+v16u8 __lsx_vssrarn_bu_h(v8u16 _1, v8u16 _2){return __builtin_lsx_vssrarn_bu_h(_1, _2);}
+v8u16 __lsx_vssrarn_hu_w(v4u32 _1, v4u32 _2){return __builtin_lsx_vssrarn_hu_w(_1, _2);}
+v4u32 __lsx_vssrarn_wu_d(v2u64 _1, v2u64 _2){return __builtin_lsx_vssrarn_wu_d(_1, _2);}
+v16i8 __lsx_vsrln_b_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsrln_b_h(_1, _2);}
+v8i16 __lsx_vsrln_h_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsrln_h_w(_1, _2);}
+v4i32 __lsx_vsrln_w_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsrln_w_d(_1, _2);}
+v16u8 __lsx_vssrln_bu_h(v8u16 _1, v8u16 _2){return __builtin_lsx_vssrln_bu_h(_1, _2);}
+v8u16 __lsx_vssrln_hu_w(v4u32 _1, v4u32 _2){return __builtin_lsx_vssrln_hu_w(_1, _2);}
+v4u32 __lsx_vssrln_wu_d(v2u64 _1, v2u64 _2){return __builtin_lsx_vssrln_wu_d(_1, _2);}
+v16i8 __lsx_vsrlrn_b_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsrlrn_b_h(_1, _2);}
+v8i16 __lsx_vsrlrn_h_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsrlrn_h_w(_1, _2);}
+v4i32 __lsx_vsrlrn_w_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsrlrn_w_d(_1, _2);}
+v16u8 __lsx_vssrlrn_bu_h(v8u16 _1, v8u16 _2){return __builtin_lsx_vssrlrn_bu_h(_1, _2);}
+v8u16 __lsx_vssrlrn_hu_w(v4u32 _1, v4u32 _2){return __builtin_lsx_vssrlrn_hu_w(_1, _2);}
+v4u32 __lsx_vssrlrn_wu_d(v2u64 _1, v2u64 _2){return __builtin_lsx_vssrlrn_wu_d(_1, _2);}
+v16i8 __lsx_vfrstpi_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vfrstpi_b(_1, _2, 1);}
+v8i16 __lsx_vfrstpi_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vfrstpi_h(_1, _2, 1);}
+v16i8 __lsx_vfrstp_b(v16i8 _1, v16i8 _2, v16i8 _3){return __builtin_lsx_vfrstp_b(_1, _2, _3);}
+v8i16 __lsx_vfrstp_h(v8i16 _1, v8i16 _2, v8i16 _3){return __builtin_lsx_vfrstp_h(_1, _2, _3);}
+v2i64 __lsx_vshuf4i_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vshuf4i_d(_1, _2, 1);}
+v16i8 __lsx_vbsrl_v(v16i8 _1){return __builtin_lsx_vbsrl_v(_1, 1);}
+v16i8 __lsx_vbsll_v(v16i8 _1){return __builtin_lsx_vbsll_v(_1, 1);}
+v16i8 __lsx_vextrins_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vextrins_b(_1, _2, 1);}
+v8i16 __lsx_vextrins_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vextrins_h(_1, _2, 1);}
+v4i32 __lsx_vextrins_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vextrins_w(_1, _2, 1);}
+v2i64 __lsx_vextrins_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vextrins_d(_1, _2, 1);}
+v16i8 __lsx_vmskltz_b(v16i8 _1){return __builtin_lsx_vmskltz_b(_1);}
+v8i16 __lsx_vmskltz_h(v8i16 _1){return __builtin_lsx_vmskltz_h(_1);}
+v4i32 __lsx_vmskltz_w(v4i32 _1){return __builtin_lsx_vmskltz_w(_1);}
+v2i64 __lsx_vmskltz_d(v2i64 _1){return __builtin_lsx_vmskltz_d(_1);}
+v16i8 __lsx_vsigncov_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vsigncov_b(_1, _2);}
+v8i16 __lsx_vsigncov_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsigncov_h(_1, _2);}
+v4i32 __lsx_vsigncov_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsigncov_w(_1, _2);}
+v2i64 __lsx_vsigncov_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsigncov_d(_1, _2);}
+v4f32 __lsx_vfmadd_s(v4f32 _1, v4f32 _2, v4f32 _3){return __builtin_lsx_vfmadd_s(_1, _2, _3);}
+v2f64 __lsx_vfmadd_d(v2f64 _1, v2f64 _2, v2f64 _3){return __builtin_lsx_vfmadd_d(_1, _2, _3);}
+v4f32 __lsx_vfmsub_s(v4f32 _1, v4f32 _2, v4f32 _3){return __builtin_lsx_vfmsub_s(_1, _2, _3);}
+v2f64 __lsx_vfmsub_d(v2f64 _1, v2f64 _2, v2f64 _3){return __builtin_lsx_vfmsub_d(_1, _2, _3);}
+v4f32 __lsx_vfnmadd_s(v4f32 _1, v4f32 _2, v4f32 _3){return __builtin_lsx_vfnmadd_s(_1, _2, _3);}
+v2f64 __lsx_vfnmadd_d(v2f64 _1, v2f64 _2, v2f64 _3){return __builtin_lsx_vfnmadd_d(_1, _2, _3);}
+v4f32 __lsx_vfnmsub_s(v4f32 _1, v4f32 _2, v4f32 _3){return __builtin_lsx_vfnmsub_s(_1, _2, _3);}
+v2f64 __lsx_vfnmsub_d(v2f64 _1, v2f64 _2, v2f64 _3){return __builtin_lsx_vfnmsub_d(_1, _2, _3);}
+v4i32 __lsx_vftintrne_w_s(v4f32 _1){return __builtin_lsx_vftintrne_w_s(_1);}
+v2i64 __lsx_vftintrne_l_d(v2f64 _1){return __builtin_lsx_vftintrne_l_d(_1);}
+v4i32 __lsx_vftintrp_w_s(v4f32 _1){return __builtin_lsx_vftintrp_w_s(_1);}
+v2i64 __lsx_vftintrp_l_d(v2f64 _1){return __builtin_lsx_vftintrp_l_d(_1);}
+v4i32 __lsx_vftintrm_w_s(v4f32 _1){return __builtin_lsx_vftintrm_w_s(_1);}
+v2i64 __lsx_vftintrm_l_d(v2f64 _1){return __builtin_lsx_vftintrm_l_d(_1);}
+v4i32 __lsx_vftint_w_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vftint_w_d(_1, _2);}
+v4f32 __lsx_vffint_s_l(v2i64 _1, v2i64 _2){return __builtin_lsx_vffint_s_l(_1, _2);}
+v4i32 __lsx_vftintrz_w_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vftintrz_w_d(_1, _2);}
+v4i32 __lsx_vftintrp_w_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vftintrp_w_d(_1, _2);}
+v4i32 __lsx_vftintrm_w_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vftintrm_w_d(_1, _2);}
+v4i32 __lsx_vftintrne_w_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vftintrne_w_d(_1, _2);}
+v2i64 __lsx_vftintl_l_s(v4f32 _1){return __builtin_lsx_vftintl_l_s(_1);}
+v2i64 __lsx_vftinth_l_s(v4f32 _1){return __builtin_lsx_vftinth_l_s(_1);}
+v2f64 __lsx_vffinth_d_w(v4i32 _1){return __builtin_lsx_vffinth_d_w(_1);}
+v2f64 __lsx_vffintl_d_w(v4i32 _1){return __builtin_lsx_vffintl_d_w(_1);}
+v2i64 __lsx_vftintrzl_l_s(v4f32 _1){return __builtin_lsx_vftintrzl_l_s(_1);}
+v2i64 __lsx_vftintrzh_l_s(v4f32 _1){return __builtin_lsx_vftintrzh_l_s(_1);}
+v2i64 __lsx_vftintrpl_l_s(v4f32 _1){return __builtin_lsx_vftintrpl_l_s(_1);}
+v2i64 __lsx_vftintrph_l_s(v4f32 _1){return __builtin_lsx_vftintrph_l_s(_1);}
+v2i64 __lsx_vftintrml_l_s(v4f32 _1){return __builtin_lsx_vftintrml_l_s(_1);}
+v2i64 __lsx_vftintrmh_l_s(v4f32 _1){return __builtin_lsx_vftintrmh_l_s(_1);}
+v2i64 __lsx_vftintrnel_l_s(v4f32 _1){return __builtin_lsx_vftintrnel_l_s(_1);}
+v2i64 __lsx_vftintrneh_l_s(v4f32 _1){return __builtin_lsx_vftintrneh_l_s(_1);}
+v4f32 __lsx_vfrintrne_s(v4f32 _1){return __builtin_lsx_vfrintrne_s(_1);}
+v2f64 __lsx_vfrintrne_d(v2f64 _1){return __builtin_lsx_vfrintrne_d(_1);}
+v4f32 __lsx_vfrintrz_s(v4f32 _1){return __builtin_lsx_vfrintrz_s(_1);}
+v2f64 __lsx_vfrintrz_d(v2f64 _1){return __builtin_lsx_vfrintrz_d(_1);}
+v4f32 __lsx_vfrintrp_s(v4f32 _1){return __builtin_lsx_vfrintrp_s(_1);}
+v2f64 __lsx_vfrintrp_d(v2f64 _1){return __builtin_lsx_vfrintrp_d(_1);}
+v4f32 __lsx_vfrintrm_s(v4f32 _1){return __builtin_lsx_vfrintrm_s(_1);}
+v2f64 __lsx_vfrintrm_d(v2f64 _1){return __builtin_lsx_vfrintrm_d(_1);}
+void __lsx_vstelm_b(v16i8 _1, void * _2){__builtin_lsx_vstelm_b(_1, _2, 1, 1);}
+void __lsx_vstelm_h(v8i16 _1, void * _2){__builtin_lsx_vstelm_h(_1, _2, 2, 1);}
+void __lsx_vstelm_w(v4i32 _1, void * _2){__builtin_lsx_vstelm_w(_1, _2, 4, 1);}
+void __lsx_vstelm_d(v2i64 _1, void * _2){__builtin_lsx_vstelm_d(_1, _2, 8, 1);}
+v2i64 __lsx_vaddwev_d_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vaddwev_d_w(_1, _2);}
+v4i32 __lsx_vaddwev_w_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vaddwev_w_h(_1, _2);}
+v8i16 __lsx_vaddwev_h_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vaddwev_h_b(_1, _2);}
+v2i64 __lsx_vaddwod_d_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vaddwod_d_w(_1, _2);}
+v4i32 __lsx_vaddwod_w_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vaddwod_w_h(_1, _2);}
+v8i16 __lsx_vaddwod_h_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vaddwod_h_b(_1, _2);}
+v2i64 __lsx_vaddwev_d_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vaddwev_d_wu(_1, _2);}
+v4i32 __lsx_vaddwev_w_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vaddwev_w_hu(_1, _2);}
+v8i16 __lsx_vaddwev_h_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vaddwev_h_bu(_1, _2);}
+v2i64 __lsx_vaddwod_d_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vaddwod_d_wu(_1, _2);}
+v4i32 __lsx_vaddwod_w_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vaddwod_w_hu(_1, _2);}
+v8i16 __lsx_vaddwod_h_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vaddwod_h_bu(_1, _2);}
+v2i64 __lsx_vaddwev_d_wu_w(v4u32 _1, v4i32 _2){return __builtin_lsx_vaddwev_d_wu_w(_1, _2);}
+v4i32 __lsx_vaddwev_w_hu_h(v8u16 _1, v8i16 _2){return __builtin_lsx_vaddwev_w_hu_h(_1, _2);}
+v8i16 __lsx_vaddwev_h_bu_b(v16u8 _1, v16i8 _2){return __builtin_lsx_vaddwev_h_bu_b(_1, _2);}
+v2i64 __lsx_vaddwod_d_wu_w(v4u32 _1, v4i32 _2){return __builtin_lsx_vaddwod_d_wu_w(_1, _2);}
+v4i32 __lsx_vaddwod_w_hu_h(v8u16 _1, v8i16 _2){return __builtin_lsx_vaddwod_w_hu_h(_1, _2);}
+v8i16 __lsx_vaddwod_h_bu_b(v16u8 _1, v16i8 _2){return __builtin_lsx_vaddwod_h_bu_b(_1, _2);}
+v2i64 __lsx_vsubwev_d_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsubwev_d_w(_1, _2);}
+v4i32 __lsx_vsubwev_w_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsubwev_w_h(_1, _2);}
+v8i16 __lsx_vsubwev_h_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vsubwev_h_b(_1, _2);}
+v2i64 __lsx_vsubwod_d_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vsubwod_d_w(_1, _2);}
+v4i32 __lsx_vsubwod_w_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vsubwod_w_h(_1, _2);}
+v8i16 __lsx_vsubwod_h_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vsubwod_h_b(_1, _2);}
+v2i64 __lsx_vsubwev_d_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vsubwev_d_wu(_1, _2);}
+v4i32 __lsx_vsubwev_w_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vsubwev_w_hu(_1, _2);}
+v8i16 __lsx_vsubwev_h_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vsubwev_h_bu(_1, _2);}
+v2i64 __lsx_vsubwod_d_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vsubwod_d_wu(_1, _2);}
+v4i32 __lsx_vsubwod_w_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vsubwod_w_hu(_1, _2);}
+v8i16 __lsx_vsubwod_h_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vsubwod_h_bu(_1, _2);}
+v2i64 __lsx_vaddwev_q_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vaddwev_q_d(_1, _2);}
+v2i64 __lsx_vaddwod_q_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vaddwod_q_d(_1, _2);}
+v2i64 __lsx_vaddwev_q_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vaddwev_q_du(_1, _2);}
+v2i64 __lsx_vaddwod_q_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vaddwod_q_du(_1, _2);}
+v2i64 __lsx_vsubwev_q_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsubwev_q_d(_1, _2);}
+v2i64 __lsx_vsubwod_q_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vsubwod_q_d(_1, _2);}
+v2i64 __lsx_vsubwev_q_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vsubwev_q_du(_1, _2);}
+v2i64 __lsx_vsubwod_q_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vsubwod_q_du(_1, _2);}
+v2i64 __lsx_vaddwev_q_du_d(v2u64 _1, v2i64 _2){return __builtin_lsx_vaddwev_q_du_d(_1, _2);}
+v2i64 __lsx_vaddwod_q_du_d(v2u64 _1, v2i64 _2){return __builtin_lsx_vaddwod_q_du_d(_1, _2);}
+v2i64 __lsx_vmulwev_d_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vmulwev_d_w(_1, _2);}
+v4i32 __lsx_vmulwev_w_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vmulwev_w_h(_1, _2);}
+v8i16 __lsx_vmulwev_h_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vmulwev_h_b(_1, _2);}
+v2i64 __lsx_vmulwod_d_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vmulwod_d_w(_1, _2);}
+v4i32 __lsx_vmulwod_w_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vmulwod_w_h(_1, _2);}
+v8i16 __lsx_vmulwod_h_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vmulwod_h_b(_1, _2);}
+v2i64 __lsx_vmulwev_d_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vmulwev_d_wu(_1, _2);}
+v4i32 __lsx_vmulwev_w_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vmulwev_w_hu(_1, _2);}
+v8i16 __lsx_vmulwev_h_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vmulwev_h_bu(_1, _2);}
+v2i64 __lsx_vmulwod_d_wu(v4u32 _1, v4u32 _2){return __builtin_lsx_vmulwod_d_wu(_1, _2);}
+v4i32 __lsx_vmulwod_w_hu(v8u16 _1, v8u16 _2){return __builtin_lsx_vmulwod_w_hu(_1, _2);}
+v8i16 __lsx_vmulwod_h_bu(v16u8 _1, v16u8 _2){return __builtin_lsx_vmulwod_h_bu(_1, _2);}
+v2i64 __lsx_vmulwev_d_wu_w(v4u32 _1, v4i32 _2){return __builtin_lsx_vmulwev_d_wu_w(_1, _2);}
+v4i32 __lsx_vmulwev_w_hu_h(v8u16 _1, v8i16 _2){return __builtin_lsx_vmulwev_w_hu_h(_1, _2);}
+v8i16 __lsx_vmulwev_h_bu_b(v16u8 _1, v16i8 _2){return __builtin_lsx_vmulwev_h_bu_b(_1, _2);}
+v2i64 __lsx_vmulwod_d_wu_w(v4u32 _1, v4i32 _2){return __builtin_lsx_vmulwod_d_wu_w(_1, _2);}
+v4i32 __lsx_vmulwod_w_hu_h(v8u16 _1, v8i16 _2){return __builtin_lsx_vmulwod_w_hu_h(_1, _2);}
+v8i16 __lsx_vmulwod_h_bu_b(v16u8 _1, v16i8 _2){return __builtin_lsx_vmulwod_h_bu_b(_1, _2);}
+v2i64 __lsx_vmulwev_q_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vmulwev_q_d(_1, _2);}
+v2i64 __lsx_vmulwod_q_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vmulwod_q_d(_1, _2);}
+v2i64 __lsx_vmulwev_q_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vmulwev_q_du(_1, _2);}
+v2i64 __lsx_vmulwod_q_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vmulwod_q_du(_1, _2);}
+v2i64 __lsx_vmulwev_q_du_d(v2u64 _1, v2i64 _2){return __builtin_lsx_vmulwev_q_du_d(_1, _2);}
+v2i64 __lsx_vmulwod_q_du_d(v2u64 _1, v2i64 _2){return __builtin_lsx_vmulwod_q_du_d(_1, _2);}
+v2i64 __lsx_vhaddw_q_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vhaddw_q_d(_1, _2);}
+v2u64 __lsx_vhaddw_qu_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vhaddw_qu_du(_1, _2);}
+v2i64 __lsx_vhsubw_q_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vhsubw_q_d(_1, _2);}
+v2u64 __lsx_vhsubw_qu_du(v2u64 _1, v2u64 _2){return __builtin_lsx_vhsubw_qu_du(_1, _2);}
+v2i64 __lsx_vmaddwev_d_w(v2i64 _1, v4i32 _2, v4i32 _3){return __builtin_lsx_vmaddwev_d_w(_1, _2, _3);}
+v4i32 __lsx_vmaddwev_w_h(v4i32 _1, v8i16 _2, v8i16 _3){return __builtin_lsx_vmaddwev_w_h(_1, _2, _3);}
+v8i16 __lsx_vmaddwev_h_b(v8i16 _1, v16i8 _2, v16i8 _3){return __builtin_lsx_vmaddwev_h_b(_1, _2, _3);}
+v2u64 __lsx_vmaddwev_d_wu(v2u64 _1, v4u32 _2, v4u32 _3){return __builtin_lsx_vmaddwev_d_wu(_1, _2, _3);}
+v4u32 __lsx_vmaddwev_w_hu(v4u32 _1, v8u16 _2, v8u16 _3){return __builtin_lsx_vmaddwev_w_hu(_1, _2, _3);}
+v8u16 __lsx_vmaddwev_h_bu(v8u16 _1, v16u8 _2, v16u8 _3){return __builtin_lsx_vmaddwev_h_bu(_1, _2, _3);}
+v2i64 __lsx_vmaddwod_d_w(v2i64 _1, v4i32 _2, v4i32 _3){return __builtin_lsx_vmaddwod_d_w(_1, _2, _3);}
+v4i32 __lsx_vmaddwod_w_h(v4i32 _1, v8i16 _2, v8i16 _3){return __builtin_lsx_vmaddwod_w_h(_1, _2, _3);}
+v8i16 __lsx_vmaddwod_h_b(v8i16 _1, v16i8 _2, v16i8 _3){return __builtin_lsx_vmaddwod_h_b(_1, _2, _3);}
+v2u64 __lsx_vmaddwod_d_wu(v2u64 _1, v4u32 _2, v4u32 _3){return __builtin_lsx_vmaddwod_d_wu(_1, _2, _3);}
+v4u32 __lsx_vmaddwod_w_hu(v4u32 _1, v8u16 _2, v8u16 _3){return __builtin_lsx_vmaddwod_w_hu(_1, _2, _3);}
+v8u16 __lsx_vmaddwod_h_bu(v8u16 _1, v16u8 _2, v16u8 _3){return __builtin_lsx_vmaddwod_h_bu(_1, _2, _3);}
+v2i64 __lsx_vmaddwev_d_wu_w(v2i64 _1, v4u32 _2, v4i32 _3){return __builtin_lsx_vmaddwev_d_wu_w(_1, _2, _3);}
+v4i32 __lsx_vmaddwev_w_hu_h(v4i32 _1, v8u16 _2, v8i16 _3){return __builtin_lsx_vmaddwev_w_hu_h(_1, _2, _3);}
+v8i16 __lsx_vmaddwev_h_bu_b(v8i16 _1, v16u8 _2, v16i8 _3){return __builtin_lsx_vmaddwev_h_bu_b(_1, _2, _3);}
+v2i64 __lsx_vmaddwod_d_wu_w(v2i64 _1, v4u32 _2, v4i32 _3){return __builtin_lsx_vmaddwod_d_wu_w(_1, _2, _3);}
+v4i32 __lsx_vmaddwod_w_hu_h(v4i32 _1, v8u16 _2, v8i16 _3){return __builtin_lsx_vmaddwod_w_hu_h(_1, _2, _3);}
+v8i16 __lsx_vmaddwod_h_bu_b(v8i16 _1, v16u8 _2, v16i8 _3){return __builtin_lsx_vmaddwod_h_bu_b(_1, _2, _3);}
+v2i64 __lsx_vmaddwev_q_d(v2i64 _1, v2i64 _2, v2i64 _3){return __builtin_lsx_vmaddwev_q_d(_1, _2, _3);}
+v2i64 __lsx_vmaddwod_q_d(v2i64 _1, v2i64 _2, v2i64 _3){return __builtin_lsx_vmaddwod_q_d(_1, _2, _3);}
+v2u64 __lsx_vmaddwev_q_du(v2u64 _1, v2u64 _2, v2u64 _3){return __builtin_lsx_vmaddwev_q_du(_1, _2, _3);}
+v2u64 __lsx_vmaddwod_q_du(v2u64 _1, v2u64 _2, v2u64 _3){return __builtin_lsx_vmaddwod_q_du(_1, _2, _3);}
+v2i64 __lsx_vmaddwev_q_du_d(v2i64 _1, v2u64 _2, v2i64 _3){return __builtin_lsx_vmaddwev_q_du_d(_1, _2, _3);}
+v2i64 __lsx_vmaddwod_q_du_d(v2i64 _1, v2u64 _2, v2i64 _3){return __builtin_lsx_vmaddwod_q_du_d(_1, _2, _3);}
+v16i8 __lsx_vrotr_b(v16i8 _1, v16i8 _2){return __builtin_lsx_vrotr_b(_1, _2);}
+v8i16 __lsx_vrotr_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vrotr_h(_1, _2);}
+v4i32 __lsx_vrotr_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vrotr_w(_1, _2);}
+v2i64 __lsx_vrotr_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vrotr_d(_1, _2);}
+v2i64 __lsx_vadd_q(v2i64 _1, v2i64 _2){return __builtin_lsx_vadd_q(_1, _2);}
+v2i64 __lsx_vsub_q(v2i64 _1, v2i64 _2){return __builtin_lsx_vsub_q(_1, _2);}
+v16i8 __lsx_vldrepl_b(void * _1){return __builtin_lsx_vldrepl_b(_1, 1);}
+v8i16 __lsx_vldrepl_h(void * _1){return __builtin_lsx_vldrepl_h(_1, 2);}
+v4i32 __lsx_vldrepl_w(void * _1){return __builtin_lsx_vldrepl_w(_1, 4);}
+v2i64 __lsx_vldrepl_d(void * _1){return __builtin_lsx_vldrepl_d(_1, 8);}
+v16i8 __lsx_vmskgez_b(v16i8 _1){return __builtin_lsx_vmskgez_b(_1);}
+v16i8 __lsx_vmsknz_b(v16i8 _1){return __builtin_lsx_vmsknz_b(_1);}
+v8i16 __lsx_vexth_h_b(v16i8 _1){return __builtin_lsx_vexth_h_b(_1);}
+v4i32 __lsx_vexth_w_h(v8i16 _1){return __builtin_lsx_vexth_w_h(_1);}
+v2i64 __lsx_vexth_d_w(v4i32 _1){return __builtin_lsx_vexth_d_w(_1);}
+v2i64 __lsx_vexth_q_d(v2i64 _1){return __builtin_lsx_vexth_q_d(_1);}
+v8u16 __lsx_vexth_hu_bu(v16u8 _1){return __builtin_lsx_vexth_hu_bu(_1);}
+v4u32 __lsx_vexth_wu_hu(v8u16 _1){return __builtin_lsx_vexth_wu_hu(_1);}
+v2u64 __lsx_vexth_du_wu(v4u32 _1){return __builtin_lsx_vexth_du_wu(_1);}
+v2u64 __lsx_vexth_qu_du(v2u64 _1){return __builtin_lsx_vexth_qu_du(_1);}
+v16i8 __lsx_vrotri_b(v16i8 _1){return __builtin_lsx_vrotri_b(_1, 1);}
+v8i16 __lsx_vrotri_h(v8i16 _1){return __builtin_lsx_vrotri_h(_1, 1);}
+v4i32 __lsx_vrotri_w(v4i32 _1){return __builtin_lsx_vrotri_w(_1, 1);}
+v2i64 __lsx_vrotri_d(v2i64 _1){return __builtin_lsx_vrotri_d(_1, 1);}
+v2i64 __lsx_vextl_q_d(v2i64 _1){return __builtin_lsx_vextl_q_d(_1);}
+v16i8 __lsx_vsrlni_b_h(v16i8 _1, v16i8 _2){return __builtin_lsx_vsrlni_b_h(_1, _2, 1);}
+v8i16 __lsx_vsrlni_h_w(v8i16 _1, v8i16 _2){return __builtin_lsx_vsrlni_h_w(_1, _2, 1);}
+v4i32 __lsx_vsrlni_w_d(v4i32 _1, v4i32 _2){return __builtin_lsx_vsrlni_w_d(_1, _2, 1);}
+v2i64 __lsx_vsrlni_d_q(v2i64 _1, v2i64 _2){return __builtin_lsx_vsrlni_d_q(_1, _2, 1);}
+v16i8 __lsx_vsrlrni_b_h(v16i8 _1, v16i8 _2){return __builtin_lsx_vsrlrni_b_h(_1, _2, 1);}
+v8i16 __lsx_vsrlrni_h_w(v8i16 _1, v8i16 _2){return __builtin_lsx_vsrlrni_h_w(_1, _2, 1);}
+v4i32 __lsx_vsrlrni_w_d(v4i32 _1, v4i32 _2){return __builtin_lsx_vsrlrni_w_d(_1, _2, 1);}
+v2i64 __lsx_vsrlrni_d_q(v2i64 _1, v2i64 _2){return __builtin_lsx_vsrlrni_d_q(_1, _2, 1);}
+v16i8 __lsx_vssrlni_b_h(v16i8 _1, v16i8 _2){return __builtin_lsx_vssrlni_b_h(_1, _2, 1);}
+v8i16 __lsx_vssrlni_h_w(v8i16 _1, v8i16 _2){return __builtin_lsx_vssrlni_h_w(_1, _2, 1);}
+v4i32 __lsx_vssrlni_w_d(v4i32 _1, v4i32 _2){return __builtin_lsx_vssrlni_w_d(_1, _2, 1);}
+v2i64 __lsx_vssrlni_d_q(v2i64 _1, v2i64 _2){return __builtin_lsx_vssrlni_d_q(_1, _2, 1);}
+v16u8 __lsx_vssrlni_bu_h(v16u8 _1, v16i8 _2){return __builtin_lsx_vssrlni_bu_h(_1, _2, 1);}
+v8u16 __lsx_vssrlni_hu_w(v8u16 _1, v8i16 _2){return __builtin_lsx_vssrlni_hu_w(_1, _2, 1);}
+v4u32 __lsx_vssrlni_wu_d(v4u32 _1, v4i32 _2){return __builtin_lsx_vssrlni_wu_d(_1, _2, 1);}
+v2u64 __lsx_vssrlni_du_q(v2u64 _1, v2i64 _2){return __builtin_lsx_vssrlni_du_q(_1, _2, 1);}
+v16i8 __lsx_vssrlrni_b_h(v16i8 _1, v16i8 _2){return __builtin_lsx_vssrlrni_b_h(_1, _2, 1);}
+v8i16 __lsx_vssrlrni_h_w(v8i16 _1, v8i16 _2){return __builtin_lsx_vssrlrni_h_w(_1, _2, 1);}
+v4i32 __lsx_vssrlrni_w_d(v4i32 _1, v4i32 _2){return __builtin_lsx_vssrlrni_w_d(_1, _2, 1);}
+v2i64 __lsx_vssrlrni_d_q(v2i64 _1, v2i64 _2){return __builtin_lsx_vssrlrni_d_q(_1, _2, 1);}
+v16u8 __lsx_vssrlrni_bu_h(v16u8 _1, v16i8 _2){return __builtin_lsx_vssrlrni_bu_h(_1, _2, 1);}
+v8u16 __lsx_vssrlrni_hu_w(v8u16 _1, v8i16 _2){return __builtin_lsx_vssrlrni_hu_w(_1, _2, 1);}
+v4u32 __lsx_vssrlrni_wu_d(v4u32 _1, v4i32 _2){return __builtin_lsx_vssrlrni_wu_d(_1, _2, 1);}
+v2u64 __lsx_vssrlrni_du_q(v2u64 _1, v2i64 _2){return __builtin_lsx_vssrlrni_du_q(_1, _2, 1);}
+v16i8 __lsx_vsrani_b_h(v16i8 _1, v16i8 _2){return __builtin_lsx_vsrani_b_h(_1, _2, 1);}
+v8i16 __lsx_vsrani_h_w(v8i16 _1, v8i16 _2){return __builtin_lsx_vsrani_h_w(_1, _2, 1);}
+v4i32 __lsx_vsrani_w_d(v4i32 _1, v4i32 _2){return __builtin_lsx_vsrani_w_d(_1, _2, 1);}
+v2i64 __lsx_vsrani_d_q(v2i64 _1, v2i64 _2){return __builtin_lsx_vsrani_d_q(_1, _2, 1);}
+v16i8 __lsx_vsrarni_b_h(v16i8 _1, v16i8 _2){return __builtin_lsx_vsrarni_b_h(_1, _2, 1);}
+v8i16 __lsx_vsrarni_h_w(v8i16 _1, v8i16 _2){return __builtin_lsx_vsrarni_h_w(_1, _2, 1);}
+v4i32 __lsx_vsrarni_w_d(v4i32 _1, v4i32 _2){return __builtin_lsx_vsrarni_w_d(_1, _2, 1);}
+v2i64 __lsx_vsrarni_d_q(v2i64 _1, v2i64 _2){return __builtin_lsx_vsrarni_d_q(_1, _2, 1);}
+v16i8 __lsx_vssrani_b_h(v16i8 _1, v16i8 _2){return __builtin_lsx_vssrani_b_h(_1, _2, 1);}
+v8i16 __lsx_vssrani_h_w(v8i16 _1, v8i16 _2){return __builtin_lsx_vssrani_h_w(_1, _2, 1);}
+v4i32 __lsx_vssrani_w_d(v4i32 _1, v4i32 _2){return __builtin_lsx_vssrani_w_d(_1, _2, 1);}
+v2i64 __lsx_vssrani_d_q(v2i64 _1, v2i64 _2){return __builtin_lsx_vssrani_d_q(_1, _2, 1);}
+v16u8 __lsx_vssrani_bu_h(v16u8 _1, v16i8 _2){return __builtin_lsx_vssrani_bu_h(_1, _2, 1);}
+v8u16 __lsx_vssrani_hu_w(v8u16 _1, v8i16 _2){return __builtin_lsx_vssrani_hu_w(_1, _2, 1);}
+v4u32 __lsx_vssrani_wu_d(v4u32 _1, v4i32 _2){return __builtin_lsx_vssrani_wu_d(_1, _2, 1);}
+v2u64 __lsx_vssrani_du_q(v2u64 _1, v2i64 _2){return __builtin_lsx_vssrani_du_q(_1, _2, 1);}
+v16i8 __lsx_vssrarni_b_h(v16i8 _1, v16i8 _2){return __builtin_lsx_vssrarni_b_h(_1, _2, 1);}
+v8i16 __lsx_vssrarni_h_w(v8i16 _1, v8i16 _2){return __builtin_lsx_vssrarni_h_w(_1, _2, 1);}
+v4i32 __lsx_vssrarni_w_d(v4i32 _1, v4i32 _2){return __builtin_lsx_vssrarni_w_d(_1, _2, 1);}
+v2i64 __lsx_vssrarni_d_q(v2i64 _1, v2i64 _2){return __builtin_lsx_vssrarni_d_q(_1, _2, 1);}
+v16u8 __lsx_vssrarni_bu_h(v16u8 _1, v16i8 _2){return __builtin_lsx_vssrarni_bu_h(_1, _2, 1);}
+v8u16 __lsx_vssrarni_hu_w(v8u16 _1, v8i16 _2){return __builtin_lsx_vssrarni_hu_w(_1, _2, 1);}
+v4u32 __lsx_vssrarni_wu_d(v4u32 _1, v4i32 _2){return __builtin_lsx_vssrarni_wu_d(_1, _2, 1);}
+v2u64 __lsx_vssrarni_du_q(v2u64 _1, v2i64 _2){return __builtin_lsx_vssrarni_du_q(_1, _2, 1);}
+v4i32 __lsx_vpermi_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vpermi_w(_1, _2, 1);}
+v16i8 __lsx_vld(void * _1){return __builtin_lsx_vld(_1, 1);}
+void __lsx_vst(v16i8 _1, void * _2){__builtin_lsx_vst(_1, _2, 1);}
+v16i8 __lsx_vssrlrn_b_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vssrlrn_b_h(_1, _2);}
+v8i16 __lsx_vssrlrn_h_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vssrlrn_h_w(_1, _2);}
+v4i32 __lsx_vssrlrn_w_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vssrlrn_w_d(_1, _2);}
+v16i8 __lsx_vssrln_b_h(v8i16 _1, v8i16 _2){return __builtin_lsx_vssrln_b_h(_1, _2);}
+v8i16 __lsx_vssrln_h_w(v4i32 _1, v4i32 _2){return __builtin_lsx_vssrln_h_w(_1, _2);}
+v4i32 __lsx_vssrln_w_d(v2i64 _1, v2i64 _2){return __builtin_lsx_vssrln_w_d(_1, _2);}
+v16i8 __lsx_vorn_v(v16i8 _1, v16i8 _2){return __builtin_lsx_vorn_v(_1, _2);}
+v2i64 __lsx_vldi(){return __builtin_lsx_vldi(1);}
+v16i8 __lsx_vshuf_b(v16i8 _1, v16i8 _2, v16i8 _3){return __builtin_lsx_vshuf_b(_1, _2, _3);}
+v16i8 __lsx_vldx(void * _1){return __builtin_lsx_vldx(_1, 1);}
+void __lsx_vstx(v16i8 _1, void * _2){__builtin_lsx_vstx(_1, _2, 1);}
+v2u64 __lsx_vextl_qu_du(v2u64 _1){return __builtin_lsx_vextl_qu_du(_1);}
+int __lsx_bnz_b(v16u8 _1){return __builtin_lsx_bnz_b(_1);}
+int __lsx_bnz_d(v2u64 _1){return __builtin_lsx_bnz_d(_1);}
+int __lsx_bnz_h(v8u16 _1){return __builtin_lsx_bnz_h(_1);}
+int __lsx_bnz_v(v16u8 _1){return __builtin_lsx_bnz_v(_1);}
+int __lsx_bnz_w(v4u32 _1){return __builtin_lsx_bnz_w(_1);}
+int __lsx_bz_b(v16u8 _1){return __builtin_lsx_bz_b(_1);}
+int __lsx_bz_d(v2u64 _1){return __builtin_lsx_bz_d(_1);}
+int __lsx_bz_h(v8u16 _1){return __builtin_lsx_bz_h(_1);}
+int __lsx_bz_v(v16u8 _1){return __builtin_lsx_bz_v(_1);}
+int __lsx_bz_w(v4u32 _1){return __builtin_lsx_bz_w(_1);}
+v2i64 __lsx_vfcmp_caf_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_caf_d(_1, _2);}
+v4i32 __lsx_vfcmp_caf_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_caf_s(_1, _2);}
+v2i64 __lsx_vfcmp_ceq_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_ceq_d(_1, _2);}
+v4i32 __lsx_vfcmp_ceq_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_ceq_s(_1, _2);}
+v2i64 __lsx_vfcmp_cle_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_cle_d(_1, _2);}
+v4i32 __lsx_vfcmp_cle_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_cle_s(_1, _2);}
+v2i64 __lsx_vfcmp_clt_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_clt_d(_1, _2);}
+v4i32 __lsx_vfcmp_clt_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_clt_s(_1, _2);}
+v2i64 __lsx_vfcmp_cne_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_cne_d(_1, _2);}
+v4i32 __lsx_vfcmp_cne_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_cne_s(_1, _2);}
+v2i64 __lsx_vfcmp_cor_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_cor_d(_1, _2);}
+v4i32 __lsx_vfcmp_cor_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_cor_s(_1, _2);}
+v2i64 __lsx_vfcmp_cueq_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_cueq_d(_1, _2);}
+v4i32 __lsx_vfcmp_cueq_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_cueq_s(_1, _2);}
+v2i64 __lsx_vfcmp_cule_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_cule_d(_1, _2);}
+v4i32 __lsx_vfcmp_cule_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_cule_s(_1, _2);}
+v2i64 __lsx_vfcmp_cult_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_cult_d(_1, _2);}
+v4i32 __lsx_vfcmp_cult_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_cult_s(_1, _2);}
+v2i64 __lsx_vfcmp_cun_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_cun_d(_1, _2);}
+v2i64 __lsx_vfcmp_cune_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_cune_d(_1, _2);}
+v4i32 __lsx_vfcmp_cune_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_cune_s(_1, _2);}
+v4i32 __lsx_vfcmp_cun_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_cun_s(_1, _2);}
+v2i64 __lsx_vfcmp_saf_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_saf_d(_1, _2);}
+v4i32 __lsx_vfcmp_saf_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_saf_s(_1, _2);}
+v2i64 __lsx_vfcmp_seq_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_seq_d(_1, _2);}
+v4i32 __lsx_vfcmp_seq_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_seq_s(_1, _2);}
+v2i64 __lsx_vfcmp_sle_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_sle_d(_1, _2);}
+v4i32 __lsx_vfcmp_sle_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_sle_s(_1, _2);}
+v2i64 __lsx_vfcmp_slt_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_slt_d(_1, _2);}
+v4i32 __lsx_vfcmp_slt_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_slt_s(_1, _2);}
+v2i64 __lsx_vfcmp_sne_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_sne_d(_1, _2);}
+v4i32 __lsx_vfcmp_sne_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_sne_s(_1, _2);}
+v2i64 __lsx_vfcmp_sor_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_sor_d(_1, _2);}
+v4i32 __lsx_vfcmp_sor_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_sor_s(_1, _2);}
+v2i64 __lsx_vfcmp_sueq_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_sueq_d(_1, _2);}
+v4i32 __lsx_vfcmp_sueq_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_sueq_s(_1, _2);}
+v2i64 __lsx_vfcmp_sule_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_sule_d(_1, _2);}
+v4i32 __lsx_vfcmp_sule_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_sule_s(_1, _2);}
+v2i64 __lsx_vfcmp_sult_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_sult_d(_1, _2);}
+v4i32 __lsx_vfcmp_sult_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_sult_s(_1, _2);}
+v2i64 __lsx_vfcmp_sun_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_sun_d(_1, _2);}
+v2i64 __lsx_vfcmp_sune_d(v2f64 _1, v2f64 _2){return __builtin_lsx_vfcmp_sune_d(_1, _2);}
+v4i32 __lsx_vfcmp_sune_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_sune_s(_1, _2);}
+v4i32 __lsx_vfcmp_sun_s(v4f32 _1, v4f32 _2){return __builtin_lsx_vfcmp_sun_s(_1, _2);}
+v16i8 __lsx_vrepli_b(){return __builtin_lsx_vrepli_b(1);}
+v2i64 __lsx_vrepli_d(){return __builtin_lsx_vrepli_d(1);}
+v8i16 __lsx_vrepli_h(){return __builtin_lsx_vrepli_h(1);}
+v4i32 __lsx_vrepli_w(){return __builtin_lsx_vrepli_w(1);}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-cmp.c b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-cmp.c
new file mode 100644
index 00000000000..cfdf0afdbe2
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-cmp.c
@@ -0,0 +1,3354 @@
+/* { dg-do run } */
+/* { dg-options "-mlsx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lsxintrin.h>
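+
+/* Runtime correctness checks for the LSX vector compare intrinsics
+   (__lsx_vseqi_*, __lsx_vseq_* and the other compare forms covered by
+   this file), built and run with -mlsx.  */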
+int main ()
+{
+  __m128i __m128i_op0, __m128i_op1, __m128i_op2, __m128i_out, __m128i_result;
+  __m128 __m128_op0, __m128_op1, __m128_op2, __m128_out, __m128_result;
+  __m128d __m128d_op0, __m128d_op1, __m128d_op2, __m128d_out, __m128d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+  *((int*)& __m128_op0[3]) = 0x0000c77c;
+
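+  /* Each block below follows the same pattern: the 128-bit operands and
+     the expected result are written in 64-bit halves through unsigned
+     long stores, the intrinsic under test is invoked, and ASSERTEQ_64
+     (from ../simd_correctness_check.h) compares the actual output with
+     the expected vector, reporting mismatches by __LINE__.  */
+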
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff800000c3080002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_d(__m128i_op0,7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfedb27095b6bff95;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_h(__m128i_op0,9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_b(__m128i_op0,13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_h(__m128i_op0,-3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_d(__m128i_op0,15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_h(__m128i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_d(__m128i_op0,15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000040000000400;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_d(__m128i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_d(__m128i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_w(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_w(__m128i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0010000000100000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010000000100000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_b(__m128i_op0,-2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_w(__m128i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_b(__m128i_op0,11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000001000f00fe00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000017fff00fe7f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_w(__m128i_op0,9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_d(__m128i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_b(__m128i_op0,12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_b(__m128i_op0,13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x007ffd0001400840;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_h(__m128i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff01ff010000ff7d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000fffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_b(__m128i_op0,2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_d(__m128i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_d(__m128i_op0,5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffa6ff91fdd8ef77;
+  *((unsigned long*)& __m128i_op0[0]) = 0x061202bffb141c38;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_b(__m128i_op0,13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_w(__m128i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_d(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_w(__m128i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_h(__m128i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000fef01000f27ca;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_w(__m128i_op0,-4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2a29282726252423;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2221201f1e1d1c1b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_b(__m128i_op0,-1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_w(__m128i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff80ff00ff80ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_b(__m128i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseqi_b(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_h(__m128i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_h(__m128i_op0,-8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c7c266e71768fa4;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_h(__m128i_op0,-4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0313100003131000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0313100003131000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_w(__m128i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001a0000000b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_b(__m128i_op0,15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000002a001a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000001a000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_h(__m128i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x05f5e2320605e1e2;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_h(__m128i_op0,-2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_d(__m128i_op0,15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0d060d060d060d06;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0d060d060d060d06;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_w(__m128i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_h(__m128i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff2356fe165486;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5efeb3165bd7653d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseqi_w(__m128i_op0,5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseqi_h(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ed0008005e00a2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x007a007600150077;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ed0008005e00a2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x007a007600150077;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c9c9c9c63636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff489b693120950;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffc45a851c40c18;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfda9b23a624082fd;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff7f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2d1da85b7f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7505853d654185f5;
+  *((unsigned long*)& __m128i_op1[0]) = 0x01010000fefe0101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1fc000001fc00000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1fc000001fc00000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000000010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000000010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff00000000;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000067400002685;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0800080008000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0800080008000800;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9795698585057dec;
+  *((unsigned long*)& __m128i_op0[0]) = 0x87f82867431a1d08;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1149a96eb1a08000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21201f1e19181716;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffacdb6dbecac;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1f5533a694f902c0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffe1ffffffe1;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffe1ffffffe1;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000002050320;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000002050320;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000002050320;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000002050320;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0202020202020202;
+  *((unsigned long*)& __m128i_op1[0]) = 0x363d753d50155c0a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000800080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff0000;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0f0f0f0f00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000fffe01fd02;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff0000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffff00;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000adadadad;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000adadadad;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000adadadad;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000adadadad;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5b5b5b5aadadadad;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000052525253;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff00ffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff00ffffffffff;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfcfcfcdcfcfcfcdc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfcfcfcdcfcfcfcdc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x33f5c2d7d9f5d800;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe4c23ffb002a3a22;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000004870ba0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000044470000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff0000ffff;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000404040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff0000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff0000;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000005c000000b2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000007600000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffffffff;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c0dec4d1;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vseq_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001000000048;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffeffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000016;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffff0000;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000f50000000900;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000090900000998;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff00ffffff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffff0000;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f7f7f007f7f7f00;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf2c97aaa7d8fa270;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0b73e427f7cfcb88;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff01fe03ff01fe03;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vseq_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vseq_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000003900;
+  *((unsigned long*)& __m128i_op0[0]) = 0x68bcf93435ed25ed;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_wu(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_w(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_du(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_h(__m128i_op0,3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_h(__m128i_op0,6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_d(__m128i_op0,7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_du(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_w(__m128i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_du(__m128i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vslei_wu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3fc000003fc00000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3fc000003fc00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_w(__m128i_op0,1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000ff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_du(__m128i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd82480697f678077;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff489b693120950;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffc45a851c40c18;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ff00;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x03574e3a62407e03;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vslei_wu(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000001fc1a568;
+  *((unsigned long*)& __m128i_op0[0]) = 0x02693fe0e7beb077;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_d(__m128i_op0,-6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_w(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_hu(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000f0000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vslei_wu(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_b(__m128i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_du(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1268f057137a0267;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0048137ef886fae0;
+  *((unsigned long*)& __m128i_result[1]) = 0xff000000ff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xff00ff0000000000;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_d(__m128i_op0,-4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_h(__m128i_op0,10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_d(__m128i_op0,5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vslei_wu(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x11000f2010000e20;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0f000d200e000c20;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_b(__m128i_op0,-6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_wu(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffff7a53;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vslei_hu(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_du(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x40f0001000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x40f0001000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffffffffffff;
+  __m128i_out = __lsx_vslei_hu(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_hu(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1111113111111141;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111113111111121;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_h(__m128i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf1819b7c0732a6b6;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffb9917a6e7fffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_d(__m128i_op0,12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xa2a2a2a3a2a2a2a3;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc605c000aedd0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_wu(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffff0000;
+  __m128i_out = __lsx_vslei_hu(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000202fe02;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff00ff;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_h(__m128i_op0,-16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffb96bffff57c9;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff6080ffff4417;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_w(__m128i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000f;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff0000;
+  __m128i_out = __lsx_vslei_hu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_d(__m128i_op0,12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0037ffc8d7ff2800;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff00ffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_d(__m128i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff00008080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_h(__m128i_op0,-4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00001b4a00007808;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffff0000;
+  __m128i_out = __lsx_vslei_hu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_b(__m128i_op0,11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x31dc2cc1bc268c93;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c4d53d855f89514;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff00000000ffff;
+  __m128i_out = __lsx_vslei_h(__m128i_op0,13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffffffffffff;
+  __m128i_out = __lsx_vslei_hu(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_bu(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_wu(__m128i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_du(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_h(__m128i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_hu(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_h(__m128i_op0,14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd78cfd70b5f65d76;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5779108fdedda7e4;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vslei_w(__m128i_op0,-16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_h(__m128i_op0,-16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000200008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000200000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffff00ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffff00ffff;
+  __m128i_out = __lsx_vslei_b(__m128i_op0,11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_d(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_d(__m128i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000200000001;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_h(__m128i_op0,7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_w(__m128i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffffff00ffffff;
+  __m128i_out = __lsx_vslei_b(__m128i_op0,15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00250023001c001d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x309d2f342a5d2b34;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_du(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_wu(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf03ef03ef03ef03e;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf03ef03ef03ef03e;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslei_d(__m128i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslei_h(__m128i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsle_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001000100010c410;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000036280000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x42a0000042a02000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x004200a000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x004200a000200000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff00ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff00ffffffff;
+  __m128i_out = __lsx_vsle_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000007f0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000007f0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000501000002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100000008;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff80ff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff80000000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffff0000;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff0600d50e9ef518;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffefffa8007c000f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0005000400000004;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0400001001150404;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0005000400000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0400001001150404;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff0000;
+  __m128i_out = __lsx_vsle_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffff01ff01;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vsle_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000aaaaaaaa;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000aaab555b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000aaaaaaaa;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000aaab555b;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000001faea9ec;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0100000001000100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100010000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000490000004d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffffff00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ffffffffff;
+  __m128i_out = __lsx_vsle_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100007f01;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000897957687;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000408;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000ed0e0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000004080;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff00ffff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffff00;
+  __m128i_out = __lsx_vsle_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000ed0e0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000004080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000ed0e0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000004080;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0x8);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003030000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsle_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0100000001000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100000001000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0040004000400040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0040004000400040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsle_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffcafff8ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000a0;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffcafff8ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000000a0;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x004cff8fffde0051;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffffffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffff00;
+  __m128i_out = __lsx_vsle_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfbfbfb17fbfb38ea;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfbfb47fbfbfb0404;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000005fffa;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x55aa55aa55aa55ab;
+  *((unsigned long*)& __m128i_op0[0]) = 0xaa55555655aaaaa8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff00000000ffff;
+  __m128i_out = __lsx_vsle_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100fe000100fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffff0000;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000020000000200;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000020000000200;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6b6c4beb636443e3;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0507070805070708;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000003fffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000040400000404;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000040400000404;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000800080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000040002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff0000;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000bffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffc0800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsle_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000005003a;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000085af0000b000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00017ea200002000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsle_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x004d004d004d004d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x004d004d004d004d;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_op1[0]) = 0xbbc8ecc5f3ced5f3;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000007f7f7f;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00ff00fffbfffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff01ff1100000048;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfe3bfb01fe3bfe01;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe03fe3ffe01fa21;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000008680f1ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0280000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffffff00000000;
+  __m128i_out = __lsx_vsle_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000003e2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsle_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000011ff040;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsle_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
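+  /* The cases below exercise __lsx_vslti_<b/h/w/d> and the unsigned
+     <bu/hu/wu/du> forms: each result lane is all-ones when the
+     corresponding op0 lane compares less-than the immediate operand,
+     and all-zeros otherwise.  */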
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00001802041b0013;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_wu(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
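+  /* In the __lsx_vslti_wu case above, only the two zero words in the
+     high doubleword (op0[1]) are unsigned-less-than the immediate 0x1,
+     so the high 64 bits of the result are all-ones, while the low
+     doubleword's words 0x00001802 and 0x041b0013 are not, giving zero.  */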
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_wu(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2e2b34ca59fa4c88;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3b2c8aefd44be966;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_hu(__m128i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000100000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000080000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_hu(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5ff6a0a40ea8f47c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5ff6a0a40e9da42a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vslti_b(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x004200a000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x004200a000200001;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00feff0000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00feff0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffff0000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffff0000000000;
+  __m128i_out = __lsx_vslti_b(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x807f7f8000ffff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff00feff00;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff0000ffff;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x195f307a5d04acbb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd82480697f678077;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7505445465593af1;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100d6effefd0498;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_hu(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_wu(__m128i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_hu(__m128i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffe15;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffe15;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_d(__m128i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_bu(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003f800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003f800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff0000ffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xff0000ffffffffff;
+  __m128i_out = __lsx_vslti_bu(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0007000000050000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003000000010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_wu(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000007e8a60;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000001edde;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vslti_wu(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00010000ffab001c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001ffffffadff9a;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vslti_hu(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffefffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffefffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_b(__m128i_op0,5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff000100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000008a0000008a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000008900000009;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff00ffffffff;
+  __m128i_out = __lsx_vslti_bu(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000003a24;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003dbe88077c78c1;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_wu(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_bu(__m128i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0aa077b7054c9554;
+  *((unsigned long*)& __m128i_op0[0]) = 0x40c7ee1f38e4c4e8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_b(__m128i_op0,8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_b(__m128i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x371fe00000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x371fe00000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ffffffffff;
+  __m128i_out = __lsx_vslti_bu(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffc000ffffc005;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xbe8282a0793636d3;
+  *((unsigned long*)& __m128i_op0[0]) = 0x793636d3793636d3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_bu(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3a3a3a3b3a3a3a3a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3a3a00003a3a0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_d(__m128i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_d(__m128i_op0,-16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100010001000100;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_wu(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000005e695e95;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5e695e96c396b402;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_d(__m128i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5d7f5d807fea807f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_wu(__m128i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_d(__m128i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000020000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0103000201030002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_d(__m128i_op0,7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000455555555;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_d(__m128i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_bu(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_bu(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x975ca6046e2e4889;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1748c4f9ed1a5870;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_wu(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6b75948a91407a42;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0b5471b633e54fde;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_b(__m128i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00009c7c00007176;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff00000000;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_d(__m128i_op0,14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff7300000ca00430;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001a00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_hu(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0006000100040001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00010002ffff0105;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_b(__m128i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffffffffffff;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000ef0000000003b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000235600005486;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000b31600006544;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_wu(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000feff23560000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fd1654860000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_du(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x63636b6afe486741;
+  *((unsigned long*)& __m128i_op0[0]) = 0x41f8e880ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslti_d(__m128i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xa000308000008002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0500847b00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vslti_w(__m128i_op0,7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_bu(__m128i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000009c83e21a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000022001818;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vslti_hu(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslti_h(__m128i_op0,15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
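+  /* The cases below exercise the register-register comparisons
+     __lsx_vslt_<b/h/w/d> and __lsx_vslt_<bu/hu/wu/du>: each result lane
+     is all-ones when the op0 lane is (signed or unsigned) less than the
+     corresponding op1 lane, and all-zeros otherwise.  */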
+  *((unsigned long*)& __m128i_op0[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001000100010c410;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0007658000115de0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001a8960001d2cc0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0007658000115de0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001a8960001d2cc0;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffffff00ffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000ffffff;
+  __m128i_out = __lsx_vslt_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0007658000115de0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001a8960001d2cc0;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffc000007fc00000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9e801ffc7fc00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ffff0000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff00ff0000ff;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0403cfcf01c1595e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x837cd5db43fc55d4;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000040100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000384;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe3f0200004003ffd;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff00ff00ff00;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x800000ff000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ffffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslt_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0004000000040000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0004000000040000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9c9c9c9c63636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffff8f8dada;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff01018888;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000145ad;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000300003e6e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xe0404041e0404041;
+  *((unsigned long*)& __m128i_op0[0]) = 0x803f800080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslt_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000001ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ff00;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe80ffffffffff02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f0101070101010f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000127f010116;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ffffffffff;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000003f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4eede8494f000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1817161517161514;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1615141315141312;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vslt_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffefefffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffefefffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0fff0fff0fff0fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0fff0fff7f800fff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslt_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21201f1e19181716;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00005dcbe7e830c0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x03f21e0114bf19da;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00001f5400000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000fffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0010000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslt_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4050000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001c88bf0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000320;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000007730;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000ffef0010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff0000ff0000;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslt_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000080000000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000080000000800;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5e695e95e1cb5a01;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000ff;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ff00000000;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002008360500088;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000400028000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000101010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslt_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff02000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfcfcfcdcfcfcfcdc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfcfcfcdcfcfcfcdc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1748c4f9ed1a5870;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff00000000ffff;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x98147a504d145000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x377b810912c0e000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffff00;
+  __m128i_out = __lsx_vslt_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0313100003131000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0313100003131000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslt_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf9796558e39953fd;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffff359f358;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffff359f358;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffff00ff00;
+  __m128i_out = __lsx_vslt_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000024170000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffff0000;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000002a001a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001a000b00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000200000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636163636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslt_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff001a00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003fffc0ffc0003f;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffc0ffc0003f003f;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff0000000000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000ff00ff;
+  __m128i_out = __lsx_vslt_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000010a7;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000046ebaa2c;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf1f1f1f149ed7273;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf1f1f1f1865e65a1;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslt_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000003ff8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000467fef81;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vslt_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff000086bd;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ca000000c481;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00050eb00000fffa;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000f8a50000f310;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000003e2;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00050eb00000fffa;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000f8a50000f310;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffff0000;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vslt_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x317fce80317fce80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000fffe0000fffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vslt_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vslt_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vslt_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000011f0000f040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0177fff0fffffff0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000011ff8bc;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vslt_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000007fff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitsel_v(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitsel_v(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000005050000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0505000005050505;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000d02540000007e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001400140014;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0505050505050505;
+  *((unsigned long*)& __m128i_op2[0]) = 0x03574e38e496cbc9;
+  *((unsigned long*)& __m128i_result[1]) = 0x0005000400000004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0400001001150404;
+  __m128i_out = __lsx_vbitsel_v(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080001300000013;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0080001300000013;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0080001300000013;
+  *((unsigned long*)& __m128i_result[0]) = 0x0080001300000013;
+  __m128i_out = __lsx_vbitsel_v(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x43d3e0000013e000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x43d3e0000013e000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitsel_v(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffe0001fffe0001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffe0001fffe0001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xfffe0001fffe0001;
+  *((unsigned long*)& __m128i_op2[0]) = 0xfffe0001fffe0001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitsel_v(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffacdb6dbecac;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1f5533a694f902c0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x62cbf96e4acfaf40;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf0bc9a5278285a4a;
+  *((unsigned long*)& __m128i_op2[1]) = 0xfffffacdb6dbecac;
+  *((unsigned long*)& __m128i_op2[0]) = 0x1f5533a694f902c0;
+  *((unsigned long*)& __m128i_result[1]) = 0x62cbf84c02cbac00;
+  *((unsigned long*)& __m128i_result[0]) = 0x1014120210280240;
+  __m128i_out = __lsx_vbitsel_v(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffff59;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffff59;
+  __m128i_out = __lsx_vbitsel_v(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x6664666466646664;
+  *((unsigned long*)& __m128i_result[0]) = 0x6664666466646664;
+  __m128i_out = __lsx_vbitseli_b(__m128i_op0,__m128i_op1,0x66);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffff0000010000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x5d5d5d5d5d5d5d55;
+  __m128i_out = __lsx_vbitseli_b(__m128i_op0,__m128i_op1,0x5d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x5959595959595959;
+  *((unsigned long*)& __m128i_result[0]) = 0x5959595959595959;
+  __m128i_out = __lsx_vbitseli_b(__m128i_op0,__m128i_op1,0x59);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffd000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffd000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitseli_b(__m128i_op0,__m128i_op1,0x3a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbitseli_b(__m128i_op0,__m128i_op1,0xaa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0b4c600000000002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0004280808080808;
+  __m128i_out = __lsx_vbitseli_b(__m128i_op0,__m128i_op1,0xa4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000004000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00007770ffff9411;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000400000004c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00007770ffff941d;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000400000004c;
+  *((unsigned long*)& __m128i_result[0]) = 0x000047404f4f040d;
+  __m128i_out = __lsx_vbitseli_b(__m128i_op0,__m128i_op1,0x4f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-fp-arith.c b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-fp-arith.c
new file mode 100644
index 00000000000..c11c0f5a69e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-fp-arith.c
@@ -0,0 +1,3713 @@
+/* { dg-do run } */
+/* { dg-options "-mlsx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lsxintrin.h>
+int main ()
+{
+  __m128i __m128i_op0, __m128i_op1, __m128i_op2, __m128i_out, __m128i_result;
+  __m128 __m128_op0, __m128_op1, __m128_op2, __m128_out, __m128_result;
+  __m128d __m128d_op0, __m128d_op1, __m128d_op2, __m128d_out, __m128d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i = 1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+  *((int*)& __m128_op0[3]) = 0x0000c77c;
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfadd_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000000fea8ff44;
+  *((unsigned long*)& __m128d_op1[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128d_op1[0]) = 0x2020202020202020;
+  *((unsigned long*)& __m128d_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128d_result[0]) = 0x2020202020202020;
+  __m128d_out = __lsx_vfadd_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128d_result[0]) = 0x1000100010001000;
+  __m128d_out = __lsx_vfadd_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x000000000000000f;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x000000000000000f;
+  __m128d_out = __lsx_vfadd_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfadd_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000010100fe0101;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffff0200ffff01ff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0x0001010100fe0100;
+  *((unsigned long*)& __m128d_result[0]) = 0xffff0200ffff01ff;
+  __m128d_out = __lsx_vfadd_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfadd_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x7fff0101ffffe000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x7fffffffa0204000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x7f370101ff04ffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x7f3bffffa0226021;
+  *((unsigned long*)& __m128d_result[1]) = 0x7fff0101ffffe000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7fffffffa0204000;
+  __m128d_out = __lsx_vfadd_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfadd_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000ebd20000714f;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00012c8a0000a58a;
+  *((unsigned long*)& __m128d_op1[1]) = 0xf654ad7447e59090;
+  *((unsigned long*)& __m128d_op1[0]) = 0x27b1b106b8145f50;
+  *((unsigned long*)& __m128d_result[1]) = 0xf654ad7447e59090;
+  *((unsigned long*)& __m128d_result[0]) = 0x27b1b106b8145f50;
+  __m128d_out = __lsx_vfadd_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmul_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmul_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000001300000013;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000001300000013;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmul_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000100000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x1000100000001000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000100000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x1000100000001000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmul_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmul_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmul_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000007000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmul_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x01533b5e7489ae24;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffab7e71e33848;
+  *((unsigned long*)& __m128d_op1[1]) = 0x01533b5e7489ae24;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffab7e71e33848;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffab7e71e33848;
+  __m128d_out = __lsx_vfmul_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmul_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128d_result[1]) = 0x800000ff000000ff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfsub_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x00000000fff8fff8;
+  *((unsigned long*)& __m128d_op1[0]) = 0x00000000fff80000;
+  *((unsigned long*)& __m128d_result[1]) = 0x80000000fff8fff8;
+  *((unsigned long*)& __m128d_result[0]) = 0x80000000fff80000;
+  __m128d_out = __lsx_vfsub_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128d_op1[1]) = 0xb55ccf30f52a6a68;
+  *((unsigned long*)& __m128d_op1[0]) = 0x4e0018eceb82c53a;
+  *((unsigned long*)& __m128d_result[1]) = 0x355ccf30f52a6a68;
+  *((unsigned long*)& __m128d_result[0]) = 0xce0018eceb82c53a;
+  __m128d_out = __lsx_vfsub_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffff00006c82;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00009b140000917b;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffff00006c82;
+  *((unsigned long*)& __m128d_result[0]) = 0x00009b140000917b;
+  __m128d_out = __lsx_vfsub_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000100000020;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000083b00000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfsub_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xe93d0bd19ff0c170;
+  *((unsigned long*)& __m128d_op0[0]) = 0x5237c1bac9eadf55;
+  *((unsigned long*)& __m128d_op1[1]) = 0xe6d4572c8a5835bc;
+  *((unsigned long*)& __m128d_op1[0]) = 0xe5017c2ac9ca9fd0;
+  *((unsigned long*)& __m128d_result[1]) = 0xe93d0bd19ff07013;
+  *((unsigned long*)& __m128d_result[0]) = 0x65017c2ac9ca9fd0;
+  __m128d_out = __lsx_vfsub_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xe93d0bd19ff07013;
+  *((unsigned long*)& __m128d_op0[0]) = 0x65017c2ac9ca9fd0;
+  *((unsigned long*)& __m128d_op1[1]) = 0x00008bf700017052;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000f841000091aa;
+  *((unsigned long*)& __m128d_result[1]) = 0xe93d0bd19ff07013;
+  *((unsigned long*)& __m128d_result[0]) = 0x65017c2ac9ca9fd0;
+  __m128d_out = __lsx_vfsub_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000001ca02f854;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x00000001ca02f854;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfsub_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000004000000002;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x5555410154551515;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0004455501500540;
+  *((unsigned long*)& __m128d_result[1]) = 0xd555410154551515;
+  *((unsigned long*)& __m128d_result[0]) = 0x8004455501500540;
+  __m128d_out = __lsx_vfsub_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfsub_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x000300037ff000ff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0003000300a10003;
+  *((unsigned long*)& __m128d_op1[1]) = 0x000000007ff000ff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0003000300000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0003000300a10003;
+  __m128d_out = __lsx_vfsub_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x56a09e662ab46b31;
+  *((unsigned long*)& __m128d_op1[0]) = 0xb4b8122ef4054bb3;
+  *((unsigned long*)& __m128d_result[1]) = 0xd6a09e662ab46b31;
+  *((unsigned long*)& __m128d_result[0]) = 0x34b8122ef4054bb3;
+  __m128d_out = __lsx_vfsub_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfdiv_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x7f4000007f040000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x7f0200007f020000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xfffffffff8f8dada;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffff01018888;
+  *((unsigned long*)& __m128d_result[1]) = 0xfffffffff8f8dada;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffff01018888;
+  __m128d_out = __lsx_vfdiv_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfdiv_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000100007f01;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff8000000000000;
+  __m128d_out = __lsx_vfdiv_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffefefffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0400000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffefefffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfdiv_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff8000000000000;
+  __m128d_out = __lsx_vfdiv_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00000000ff801c9e;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000810000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x40eff02383e383e4;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff0000000000000;
+  __m128d_out = __lsx_vfdiv_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfdiv_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0001000000010000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000cd630000cd63;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xffff00000000ffff;
+  __m128d_out = __lsx_vfdiv_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x000aa822a79308f6;
+  *((unsigned long*)& __m128d_op1[0]) = 0x03aa558e1d37b5a1;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfdiv_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfdiv_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128d_op0[0]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128d_op1[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128d_op1[0]) = 0xfffefffe011df03e;
+  *((unsigned long*)& __m128d_result[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128d_result[0]) = 0xfffffffefffffffe;
+  __m128d_out = __lsx_vfdiv_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0x05050505;
+  *((int*)& __m128_op0[2]) = 0x05050505;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x05050000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x03574e38;
+  *((int*)& __m128_op1[0]) = 0xe496cbc9;
+  *((int*)& __m128_result[3]) = 0x05050505;
+  *((int*)& __m128_result[2]) = 0x05050505;
+  *((int*)& __m128_result[1]) = 0x03574e38;
+  *((int*)& __m128_result[0]) = 0xe496cbc9;
+  __m128_out = __lsx_vfadd_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfadd_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfadd_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x0000000f;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00077f88;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00077f97;
+  __m128_out = __lsx_vfadd_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x000000ff;
+  *((int*)& __m128_op0[0]) = 0x000000ff;
+  *((int*)& __m128_op1[3]) = 0x370bdfec;
+  *((int*)& __m128_op1[2]) = 0xffecffec;
+  *((int*)& __m128_op1[1]) = 0x370bdfec;
+  *((int*)& __m128_op1[0]) = 0xffecffec;
+  *((int*)& __m128_result[3]) = 0x370bdfec;
+  *((int*)& __m128_result[2]) = 0xffecffec;
+  *((int*)& __m128_result[1]) = 0x370bdfec;
+  *((int*)& __m128_result[0]) = 0xffecffec;
+  __m128_out = __lsx_vfadd_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x0000ff00;
+  *((int*)& __m128_op1[0]) = 0x00ff0000;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0xffffffff;
+  __m128_out = __lsx_vfadd_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffff0000;
+  *((int*)& __m128_op0[2]) = 0xffff0000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x40088040;
+  *((int*)& __m128_op1[2]) = 0x80040110;
+  *((int*)& __m128_op1[1]) = 0x40408010;
+  *((int*)& __m128_op1[0]) = 0x80200110;
+  *((int*)& __m128_result[3]) = 0xffff0000;
+  *((int*)& __m128_result[2]) = 0xffff0000;
+  *((int*)& __m128_result[1]) = 0x40408010;
+  *((int*)& __m128_result[0]) = 0x80200110;
+  __m128_out = __lsx_vfadd_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0xffffffff;
+  *((int*)& __m128_op1[2]) = 0xfffffffc;
+  *((int*)& __m128_op1[1]) = 0xffffffff;
+  *((int*)& __m128_op1[0]) = 0xfffffffc;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xfffffffc;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0xfffffffc;
+  __m128_out = __lsx_vfadd_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x0000001b;
+  *((int*)& __m128_op0[2]) = 0x0000001b;
+  *((int*)& __m128_op0[1]) = 0x0000001b;
+  *((int*)& __m128_op0[0]) = 0x0000001b;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x0000001b;
+  *((int*)& __m128_result[2]) = 0x0000001b;
+  *((int*)& __m128_result[1]) = 0x0000001b;
+  *((int*)& __m128_result[0]) = 0x0000001b;
+  __m128_out = __lsx_vfadd_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x56411278;
+  *((int*)& __m128_op0[2]) = 0x43c0d41e;
+  *((int*)& __m128_op0[1]) = 0x0124d8f6;
+  *((int*)& __m128_op0[0]) = 0xa494006b;
+  *((int*)& __m128_op1[3]) = 0xffffffff;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfmul_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x05010501;
+  *((int*)& __m128_op1[2]) = 0x05010501;
+  *((int*)& __m128_op1[1]) = 0x05010501;
+  *((int*)& __m128_op1[0]) = 0x0501050c;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmul_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x21f32eaf;
+  *((int*)& __m128_op0[2]) = 0x5b7a02c8;
+  *((int*)& __m128_op0[1]) = 0x407c2ca3;
+  *((int*)& __m128_op0[0]) = 0x2cbd0357;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00010400;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmul_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xfffe0001;
+  *((int*)& __m128_op0[2]) = 0xfffe0001;
+  *((int*)& __m128_op0[1]) = 0xfffe0001;
+  *((int*)& __m128_op0[0]) = 0xfffe0001;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xfffe0001;
+  *((int*)& __m128_result[2]) = 0xfffe0001;
+  *((int*)& __m128_result[1]) = 0xfffe0001;
+  *((int*)& __m128_result[0]) = 0xfffe0001;
+  __m128_out = __lsx_vfmul_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00002ebf;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0xffffffff;
+  *((int*)& __m128_op1[0]) = 0xffffffff;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0xffffffff;
+  __m128_out = __lsx_vfmul_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x01000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmul_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00081f1f;
+  *((int*)& __m128_op0[2]) = 0x1f1f1f1f;
+  *((int*)& __m128_op0[1]) = 0x1f1f1f1f;
+  *((int*)& __m128_op0[0]) = 0x1f1f1f1f;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmul_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x021b7d24;
+  *((int*)& __m128_op0[2]) = 0x49678a35;
+  *((int*)& __m128_op0[1]) = 0x030298a6;
+  *((int*)& __m128_op0[0]) = 0x21030a49;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000002;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmul_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmul_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xf6548a17;
+  *((int*)& __m128_op0[2]) = 0x47e59090;
+  *((int*)& __m128_op0[1]) = 0x27b169bb;
+  *((int*)& __m128_op0[0]) = 0xb8145f50;
+  *((int*)& __m128_op1[3]) = 0x004eff62;
+  *((int*)& __m128_op1[2]) = 0x00d2ff76;
+  *((int*)& __m128_op1[1]) = 0xff700028;
+  *((int*)& __m128_op1[0]) = 0x00be00a0;
+  *((int*)& __m128_result[3]) = 0xb7032c34;
+  *((int*)& __m128_result[2]) = 0x093d35ab;
+  *((int*)& __m128_result[1]) = 0xe7a6533b;
+  *((int*)& __m128_result[0]) = 0x800001b8;
+  __m128_out = __lsx_vfmul_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfsub_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x7fff0101;
+  *((int*)& __m128_op0[2]) = 0x81010102;
+  *((int*)& __m128_op0[1]) = 0x7fffffff;
+  *((int*)& __m128_op0[0]) = 0x81010102;
+  *((int*)& __m128_op1[3]) = 0x00000fff;
+  *((int*)& __m128_op1[2]) = 0xffffe000;
+  *((int*)& __m128_op1[1]) = 0x00001020;
+  *((int*)& __m128_op1[0]) = 0x20204000;
+  *((int*)& __m128_result[3]) = 0x7fff0101;
+  *((int*)& __m128_result[2]) = 0xffffe000;
+  *((int*)& __m128_result[1]) = 0x7fffffff;
+  *((int*)& __m128_result[0]) = 0xa0204000;
+  __m128_out = __lsx_vfsub_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_op1[3]) = 0x00000fff;
+  *((int*)& __m128_op1[2]) = 0xffffe000;
+  *((int*)& __m128_op1[1]) = 0x00001020;
+  *((int*)& __m128_op1[0]) = 0x20204000;
+  *((int*)& __m128_result[3]) = 0x80000fff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0x80001020;
+  *((int*)& __m128_result[0]) = 0xffffffff;
+  __m128_out = __lsx_vfsub_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfsub_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x54feed87;
+  *((int*)& __m128_op0[2]) = 0xbc3f2be1;
+  *((int*)& __m128_op0[1]) = 0x8064d8f6;
+  *((int*)& __m128_op0[0]) = 0xa494afcb;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0xff800000;
+  *((int*)& __m128_result[1]) = 0xff800000;
+  *((int*)& __m128_result[0]) = 0xff800000;
+  __m128_out = __lsx_vfdiv_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xd8248069;
+  *((int*)& __m128_op0[0]) = 0x7f678077;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0xd8248069;
+  *((int*)& __m128_op1[0]) = 0x7f678077;
+  *((int*)& __m128_result[3]) = 0x7fc00000;
+  *((int*)& __m128_result[2]) = 0x7fc00000;
+  *((int*)& __m128_result[1]) = 0x3f800000;
+  *((int*)& __m128_result[0]) = 0x3f800000;
+  __m128_out = __lsx_vfdiv_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7fc00000;
+  *((int*)& __m128_result[2]) = 0x7fc00000;
+  *((int*)& __m128_result[1]) = 0x7fc00000;
+  *((int*)& __m128_result[0]) = 0x7fc00000;
+  __m128_out = __lsx_vfdiv_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00070000;
+  *((int*)& __m128_op0[2]) = 0x00040000;
+  *((int*)& __m128_op0[1]) = 0x00030000;
+  *((int*)& __m128_op0[0]) = 0x00010000;
+  *((int*)& __m128_op1[3]) = 0x00070000;
+  *((int*)& __m128_op1[2]) = 0x00040000;
+  *((int*)& __m128_op1[1]) = 0x00030000;
+  *((int*)& __m128_op1[0]) = 0x00010000;
+  *((int*)& __m128_result[3]) = 0x3f800000;
+  *((int*)& __m128_result[2]) = 0x3f800000;
+  *((int*)& __m128_result[1]) = 0x3f800000;
+  *((int*)& __m128_result[0]) = 0x3f800000;
+  __m128_out = __lsx_vfdiv_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00010001;
+  *((int*)& __m128_op1[2]) = 0x0001007c;
+  *((int*)& __m128_op1[1]) = 0x00010001;
+  *((int*)& __m128_op1[0]) = 0x00010001;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfdiv_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00001fff;
+  *((int*)& __m128_op0[2]) = 0x00001fff;
+  *((int*)& __m128_op0[1]) = 0x00000003;
+  *((int*)& __m128_op0[0]) = 0xfffffffc;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0xfffffffc;
+  __m128_out = __lsx_vfdiv_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7fc00000;
+  *((int*)& __m128_result[2]) = 0x7fc00000;
+  *((int*)& __m128_result[1]) = 0x7fc00000;
+  *((int*)& __m128_result[0]) = 0x7fc00000;
+  __m128_out = __lsx_vfdiv_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x8a228acac14e440a;
+  *((unsigned long*)& __m128d_op1[0]) = 0xc77c47cdc0f16549;
+  *((unsigned long*)& __m128d_op2[1]) = 0xffffffffd24271c4;
+  *((unsigned long*)& __m128d_op2[0]) = 0x2711bad1e8e309ed;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffd24271c4;
+  *((unsigned long*)& __m128d_result[0]) = 0x2711bad1e8e309ed;
+  __m128d_out = __lsx_vfmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000040400000383;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffe000ffff1fff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000040400000383;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffe000ffff1fff;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffe000ffff1fff;
+  __m128d_out = __lsx_vfmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x00000000003f80b0;
+  *((unsigned long*)& __m128d_op1[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128d_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0080200000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000401000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000080000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000080000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000080000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000080000000000;
+  __m128d_out = __lsx_vfmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x000000000000001e;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m128d_op2[1]) = 0xfffb00fdfdf7ffff;
+  *((unsigned long*)& __m128d_op2[0]) = 0xfff8000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xfffb00fdfdf7ffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xfff8000000000000;
+  __m128d_out = __lsx_vfmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000009000900;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000009000900;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000009000900;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000009000900;
+  __m128d_out = __lsx_vfmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128d_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x9c83e21a22001818;
+  *((unsigned long*)& __m128d_op0[0]) = 0xdd3b8b02563b2d7b;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x7f7f7f007f7f7f00;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x7f7f7f007f7f7f00;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xfff0000000000000;
+  __m128d_out = __lsx_vfmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xff00e400ff00e400;
+  *((unsigned long*)& __m128d_op0[0]) = 0xff01e41ffff0ffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x5555000054100000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x5555000154100155;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xfff0000000000000;
+  __m128d_out = __lsx_vfmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x8000000000000010;
+  __m128d_out = __lsx_vfmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xfc01fcfefc02fdf7;
+  *((unsigned long*)& __m128d_op0[0]) = 0xfe00fcfffe01fd01;
+  *((unsigned long*)& __m128d_op1[1]) = 0xfc01fd1300000001;
+  *((unsigned long*)& __m128d_op1[0]) = 0xfe00fd1400010000;
+  *((unsigned long*)& __m128d_op2[1]) = 0xfc01fcfefc02fdf7;
+  *((unsigned long*)& __m128d_op2[0]) = 0xfe00fcfffe01fd01;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff0000000000000;
+  __m128d_out = __lsx_vfmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000bd3d00000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0038d800ff000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x00fffe00fffffe00;
+  *((unsigned long*)& __m128d_op2[1]) = 0x8000008000008080;
+  *((unsigned long*)& __m128d_op2[0]) = 0x8080800000800080;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000008000008080;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x00ff80ff00ff80ff;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000900000009;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x000000007ff000ff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffff7ffffffffe;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffff7ffffffffe;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128d_op1[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000103;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000000100000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x8000000000000103;
+  __m128d_out = __lsx_vfmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xef0179a47c793879;
+  *((unsigned long*)& __m128d_op0[0]) = 0x9f9e7e3e9ea3ff41;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x1e801ffc7fc00000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffc000007fc00000;
+  *((unsigned long*)& __m128d_result[0]) = 0x9e801ffc7fc00000;
+  __m128d_out = __lsx_vfnmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000ffff00000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x8000ffff00000000;
+  __m128d_out = __lsx_vfnmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000008800022;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128d_op2[1]) = 0xb8ec43befe38e64b;
+  *((unsigned long*)& __m128d_op2[0]) = 0x6477d042343cce24;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffbfffffffbf;
+  __m128d_out = __lsx_vfnmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xfffffffffffff000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000060000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xfffffffffffff000;
+  __m128d_out = __lsx_vfnmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xfffffffafffffffa;
+  *((unsigned long*)& __m128d_op0[0]) = 0xfffffffafffffffa;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfnmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xf8f8f8f8f8f8f8f8;
+  *((unsigned long*)& __m128d_op1[0]) = 0xf8f8f8f8f8f8f8f8;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x8000000000000000;
+  __m128d_out = __lsx_vfnmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x8000000000000000;
+  __m128d_out = __lsx_vfnmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000008000000080;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x8000000000000000;
+  __m128d_out = __lsx_vfnmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xff80ffa2fff0ff74;
+  *((unsigned long*)& __m128d_op0[0]) = 0xff76ffd8ffe6ffaa;
+  *((unsigned long*)& __m128d_op1[1]) = 0xff80ffa2fff0ff74;
+  *((unsigned long*)& __m128d_op1[0]) = 0xff76ffd8ffe6ffaa;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0303030303030303;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0303030303030303;
+  *((unsigned long*)& __m128d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xfff0000000000000;
+  __m128d_out = __lsx_vfnmadd_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x8000000000000000;
+  __m128d_out = __lsx_vfnmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0001ffff00000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x8000000000000000;
+  __m128d_out = __lsx_vfnmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128d_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128d_op1[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x3c600000ff800000;
+  *((unsigned long*)& __m128d_result[0]) = 0xfffffffffffffffe;
+  __m128d_out = __lsx_vfnmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x00000000b5207f80;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x00000000b5207f80;
+  __m128d_out = __lsx_vfnmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfnmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000009000900;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000009000900;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x8000000000000000;
+  __m128d_out = __lsx_vfnmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00c2758000bccf42;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00a975be00accf03;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op2[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x00000000ffffffff;
+  __m128d_out = __lsx_vfnmsub_d(__m128d_op0,__m128d_op1,__m128d_op2);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000002;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000002;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x0028e0a1;
+  *((int*)& __m128_op0[2]) = 0xa000a041;
+  *((int*)& __m128_op0[1]) = 0x01000041;
+  *((int*)& __m128_op0[0]) = 0x00010001;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x01000001;
+  *((int*)& __m128_op1[1]) = 0x00010001;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x01000001;
+  *((int*)& __m128_op2[1]) = 0xffffe000;
+  *((int*)& __m128_op2[0]) = 0xffff1fff;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x01000001;
+  *((int*)& __m128_result[1]) = 0xffffe000;
+  *((int*)& __m128_result[0]) = 0xffff1fff;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x7f800000;
+  *((int*)& __m128_op0[2]) = 0x7f800000;
+  *((int*)& __m128_op0[1]) = 0x7f800000;
+  *((int*)& __m128_op0[0]) = 0x7f800000;
+  *((int*)& __m128_op1[3]) = 0x00000002;
+  *((int*)& __m128_op1[2]) = 0x00000002;
+  *((int*)& __m128_op1[1]) = 0x00000003;
+  *((int*)& __m128_op1[0]) = 0x00000003;
+  *((int*)& __m128_op2[3]) = 0x3fc00000;
+  *((int*)& __m128_op2[2]) = 0x3fc00000;
+  *((int*)& __m128_op2[1]) = 0x3fc00000;
+  *((int*)& __m128_op2[0]) = 0x3fc00000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xc1bdceee;
+  *((int*)& __m128_op0[2]) = 0x242070db;
+  *((int*)& __m128_op0[1]) = 0xe8c7b756;
+  *((int*)& __m128_op0[0]) = 0xd76aa478;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x7f400000;
+  *((int*)& __m128_op0[2]) = 0x7f040000;
+  *((int*)& __m128_op0[1]) = 0x7f020000;
+  *((int*)& __m128_op0[0]) = 0x7f020000;
+  *((int*)& __m128_op1[3]) = 0xffffffff;
+  *((int*)& __m128_op1[2]) = 0x0014002c;
+  *((int*)& __m128_op1[1]) = 0xfffefffe;
+  *((int*)& __m128_op1[0]) = 0x003b0013;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0x3ea5016b;
+  *((int*)& __m128_result[1]) = 0xfffefffe;
+  *((int*)& __m128_result[0]) = 0x3f6fb04d;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x004f0080;
+  *((int*)& __m128_op0[2]) = 0x004f0080;
+  *((int*)& __m128_op0[1]) = 0x004f0080;
+  *((int*)& __m128_op0[0]) = 0x004f0080;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x7fff7fff;
+  *((int*)& __m128_op2[2]) = 0x7fff7fff;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7fff7fff;
+  *((int*)& __m128_result[2]) = 0x7fff7fff;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x3d3d3d3d;
+  *((int*)& __m128_op0[2]) = 0x3d3d3d3d;
+  *((int*)& __m128_op0[1]) = 0x3d3d3d3d;
+  *((int*)& __m128_op0[0]) = 0x3d3d3d3d;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00100000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x0000bd3d;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00050005;
+  *((int*)& __m128_op1[2]) = 0x00050005;
+  *((int*)& __m128_op1[1]) = 0x00050005;
+  *((int*)& __m128_op1[0]) = 0x00050005;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xe500c085;
+  *((int*)& __m128_op0[2]) = 0xc000c005;
+  *((int*)& __m128_op0[1]) = 0xe5c1a185;
+  *((int*)& __m128_op0[0]) = 0xc48004c5;
+  *((int*)& __m128_op1[3]) = 0xffffffff;
+  *((int*)& __m128_op1[2]) = 0xffffffff;
+  *((int*)& __m128_op1[1]) = 0xffffc000;
+  *((int*)& __m128_op1[0]) = 0xffffc005;
+  *((int*)& __m128_op2[3]) = 0xff550025;
+  *((int*)& __m128_op2[2]) = 0x002a004b;
+  *((int*)& __m128_op2[1]) = 0x00590013;
+  *((int*)& __m128_op2[0]) = 0x005cffca;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0xffffc000;
+  *((int*)& __m128_result[0]) = 0xffffc005;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00fe0001;
+  *((int*)& __m128_op1[2]) = 0x00cf005f;
+  *((int*)& __m128_op1[1]) = 0x7fff7fff;
+  *((int*)& __m128_op1[0]) = 0x7fff7f00;
+  *((int*)& __m128_op2[3]) = 0x5d7f5d00;
+  *((int*)& __m128_op2[2]) = 0x7f6a007f;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x5d7f5d00;
+  *((int*)& __m128_result[2]) = 0x7f6a007f;
+  *((int*)& __m128_result[1]) = 0x7fff7fff;
+  *((int*)& __m128_result[0]) = 0x7fff7f00;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00680486;
+  *((int*)& __m128_op0[2]) = 0xffffffda;
+  *((int*)& __m128_op0[1]) = 0xffff913b;
+  *((int*)& __m128_op0[0]) = 0xb9951901;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x01030103;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00200060;
+  *((int*)& __m128_op2[0]) = 0x00200060;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0xffffffda;
+  *((int*)& __m128_result[1]) = 0xffff913b;
+  *((int*)& __m128_result[0]) = 0x001fed4d;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x001a001a;
+  *((int*)& __m128_op0[2]) = 0x001a0008;
+  *((int*)& __m128_op0[1]) = 0x001a001a;
+  *((int*)& __m128_op0[0]) = 0x001a000b;
+  *((int*)& __m128_op1[3]) = 0xffffffff;
+  *((int*)& __m128_op1[2]) = 0xffffffff;
+  *((int*)& __m128_op1[1]) = 0xff800001;
+  *((int*)& __m128_op1[0]) = 0x0f800000;
+  *((int*)& __m128_op2[3]) = 0xff800000;
+  *((int*)& __m128_op2[2]) = 0xff800000;
+  *((int*)& __m128_op2[1]) = 0xff800000;
+  *((int*)& __m128_op2[0]) = 0xff800000;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0xffc00001;
+  *((int*)& __m128_result[0]) = 0xff800000;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xfe3bfb01;
+  *((int*)& __m128_op0[2]) = 0xfe3bfe01;
+  *((int*)& __m128_op0[1]) = 0xfe03fe3f;
+  *((int*)& __m128_op0[0]) = 0xfe01fa21;
+  *((int*)& __m128_op1[3]) = 0xfe3bfb01;
+  *((int*)& __m128_op1[2]) = 0xfe3bfe01;
+  *((int*)& __m128_op1[1]) = 0xfe03fe3f;
+  *((int*)& __m128_op1[0]) = 0xfe01fa21;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmsub_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffe001;
+  *((int*)& __m128_op0[2]) = 0xffffe001;
+  *((int*)& __m128_op0[1]) = 0xffffe001;
+  *((int*)& __m128_op0[0]) = 0xffffe001;
+  *((int*)& __m128_op1[3]) = 0xffffffff;
+  *((int*)& __m128_op1[2]) = 0xffffffff;
+  *((int*)& __m128_op1[1]) = 0xffffe000;
+  *((int*)& __m128_op1[0]) = 0x01ffe200;
+  *((int*)& __m128_op2[3]) = 0x04040383;
+  *((int*)& __m128_op2[2]) = 0x83838404;
+  *((int*)& __m128_op2[1]) = 0x04040383;
+  *((int*)& __m128_op2[0]) = 0x83838404;
+  *((int*)& __m128_result[3]) = 0xffffe001;
+  *((int*)& __m128_result[2]) = 0xffffe001;
+  *((int*)& __m128_result[1]) = 0xffffe001;
+  *((int*)& __m128_result[0]) = 0xffffe001;
+  __m128_out = __lsx_vfmsub_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x18171615;
+  *((int*)& __m128_op0[2]) = 0x17161514;
+  *((int*)& __m128_op0[1]) = 0x16151413;
+  *((int*)& __m128_op0[0]) = 0x151d3756;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x39412488;
+  *((int*)& __m128_op1[0]) = 0x80000000;
+  *((int*)& __m128_op2[3]) = 0x3ff00000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x40f3fa00;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xbff00000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0xc0f3fa00;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfmsub_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000005;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmsub_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x3ddc5dac;
+  *((int*)& __m128_op1[3]) = 0xffffffff;
+  *((int*)& __m128_op1[2]) = 0xffffffff;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmsub_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x63636b6a;
+  *((int*)& __m128_op0[2]) = 0xfe486741;
+  *((int*)& __m128_op0[1]) = 0x41f8e880;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_op1[3]) = 0xe3636363;
+  *((int*)& __m128_op1[2]) = 0x63abdf16;
+  *((int*)& __m128_op1[1]) = 0x41f8e080;
+  *((int*)& __m128_op1[0]) = 0x16161198;
+  *((int*)& __m128_op2[3]) = 0x00c27580;
+  *((int*)& __m128_op2[2]) = 0x00bccf42;
+  *((int*)& __m128_op2[1]) = 0x00a975be;
+  *((int*)& __m128_op2[0]) = 0x00accf03;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0xff800000;
+  *((int*)& __m128_result[1]) = 0x4471fb84;
+  *((int*)& __m128_result[0]) = 0xffffffff;
+  __m128_out = __lsx_vfmsub_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xfffffffe;
+  *((int*)& __m128_op0[0]) = 0xbe6ed565;
+  *((int*)& __m128_op1[3]) = 0x195f307a;
+  *((int*)& __m128_op1[2]) = 0x5d04acbb;
+  *((int*)& __m128_op1[1]) = 0x6a1a3fbb;
+  *((int*)& __m128_op1[0]) = 0x3c90260e;
+  *((int*)& __m128_op2[3]) = 0xffffffff;
+  *((int*)& __m128_op2[2]) = 0xffffffff;
+  *((int*)& __m128_op2[1]) = 0xfffffffe;
+  *((int*)& __m128_op2[0]) = 0xbe6ed565;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0xfffffffe;
+  *((int*)& __m128_result[0]) = 0x3e730941;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xff01ff01;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0xffffffff;
+  *((int*)& __m128_op2[2]) = 0xffffffff;
+  *((int*)& __m128_op2[1]) = 0xffffffff;
+  *((int*)& __m128_op2[0]) = 0xff01ff01;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0x7f01ff01;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0xffffffff;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0xffffffff;
+  *((int*)& __m128_op2[3]) = 0x00307028;
+  *((int*)& __m128_op2[2]) = 0x003f80b0;
+  *((int*)& __m128_op2[1]) = 0x0040007f;
+  *((int*)& __m128_op2[0]) = 0xff800000;
+  *((int*)& __m128_result[3]) = 0x80307028;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0x8040007f;
+  *((int*)& __m128_result[0]) = 0xffffffff;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000049;
+  *((int*)& __m128_op0[2]) = 0x0000004d;
+  *((int*)& __m128_op0[1]) = 0x00000001;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000001;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000001;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x80000000;
+  *((int*)& __m128_result[2]) = 0x80000000;
+  *((int*)& __m128_result[1]) = 0x80000001;
+  *((int*)& __m128_result[0]) = 0xffffffff;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffff0000;
+  *((int*)& __m128_op0[1]) = 0x00ff0000;
+  *((int*)& __m128_op0[0]) = 0x00ff0000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000800;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0xffffffff;
+  *((int*)& __m128_op2[2]) = 0xfffff800;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xfffff800;
+  *((int*)& __m128_result[1]) = 0x80000000;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x80000000;
+  *((int*)& __m128_result[2]) = 0x80000000;
+  *((int*)& __m128_result[1]) = 0x80000000;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00030000;
+  *((int*)& __m128_op0[2]) = 0x00010000;
+  *((int*)& __m128_op0[1]) = 0x00020000;
+  *((int*)& __m128_op0[0]) = 0x00010000;
+  *((int*)& __m128_op1[3]) = 0x3f800000;
+  *((int*)& __m128_op1[2]) = 0x3f800000;
+  *((int*)& __m128_op1[1]) = 0x3f800000;
+  *((int*)& __m128_op1[0]) = 0x3f800000;
+  *((int*)& __m128_op2[3]) = 0x00030000;
+  *((int*)& __m128_op2[2]) = 0x00010000;
+  *((int*)& __m128_op2[1]) = 0x00020000;
+  *((int*)& __m128_op2[0]) = 0x00010000;
+  *((int*)& __m128_result[3]) = 0x80060000;
+  *((int*)& __m128_result[2]) = 0x80020000;
+  *((int*)& __m128_result[1]) = 0x80040000;
+  *((int*)& __m128_result[0]) = 0x80020000;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000008;
+  *((int*)& __m128_op0[2]) = 0x97957687;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000408;
+  *((int*)& __m128_op1[3]) = 0x00000008;
+  *((int*)& __m128_op1[2]) = 0x97957687;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000408;
+  *((int*)& __m128_op2[3]) = 0x00010001;
+  *((int*)& __m128_op2[2]) = 0x00010001;
+  *((int*)& __m128_op2[1]) = 0x00010001;
+  *((int*)& __m128_op2[0]) = 0x04000800;
+  *((int*)& __m128_result[3]) = 0x80010001;
+  *((int*)& __m128_result[2]) = 0x80010001;
+  *((int*)& __m128_result[1]) = 0x80010001;
+  *((int*)& __m128_result[0]) = 0x84000800;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffc2ffe7;
+  *((int*)& __m128_op0[2]) = 0x00000007;
+  *((int*)& __m128_op0[1]) = 0x0000ffc1;
+  *((int*)& __m128_op0[0]) = 0x00010001;
+  *((int*)& __m128_op1[3]) = 0xffc2ffe7;
+  *((int*)& __m128_op1[2]) = 0x00000007;
+  *((int*)& __m128_op1[1]) = 0x0000ffc1;
+  *((int*)& __m128_op1[0]) = 0x00010001;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x000ffc2f;
+  *((int*)& __m128_op2[1]) = 0x00201df0;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xffc2ffe7;
+  *((int*)& __m128_result[2]) = 0x800ffc2f;
+  *((int*)& __m128_result[1]) = 0x80201df0;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x80000000;
+  *((int*)& __m128_result[2]) = 0x80000000;
+  *((int*)& __m128_result[1]) = 0x80000000;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000005;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x80000000;
+  *((int*)& __m128_result[2]) = 0x80000000;
+  *((int*)& __m128_result[1]) = 0x80000000;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x80808080;
+  *((int*)& __m128_op0[2]) = 0x80808080;
+  *((int*)& __m128_op0[1]) = 0x80808080;
+  *((int*)& __m128_op0[0]) = 0x80800008;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x80000000;
+  *((int*)& __m128_result[2]) = 0x80000000;
+  *((int*)& __m128_result[1]) = 0x80000000;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x31313131;
+  *((int*)& __m128_op0[0]) = 0x31313131;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x31313131;
+  *((int*)& __m128_op1[0]) = 0x31313131;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000008;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x80000000;
+  *((int*)& __m128_result[2]) = 0x80000008;
+  *((int*)& __m128_result[1]) = 0xa2f54a1e;
+  *((int*)& __m128_result[0]) = 0xa2f54a1e;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x80000000;
+  *((int*)& __m128_result[2]) = 0x80000000;
+  *((int*)& __m128_result[1]) = 0x80000000;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfnmadd_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
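+  /* Cases for __lsx_vfnmsub_s: per-element -(op0 * op1 - op2) in single precision.  */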
+  *((int*)& __m128_op0[3]) = 0xa486c90f;
+  *((int*)& __m128_op0[2]) = 0x157ca12e;
+  *((int*)& __m128_op0[1]) = 0x58bcc201;
+  *((int*)& __m128_op0[0]) = 0x2e635d65;
+  *((int*)& __m128_op1[3]) = 0x6d564875;
+  *((int*)& __m128_op1[2]) = 0xf8760005;
+  *((int*)& __m128_op1[1]) = 0x8dc5a4d1;
+  *((int*)& __m128_op1[0]) = 0x79ffa22f;
+  *((int*)& __m128_op2[3]) = 0xffffffff;
+  *((int*)& __m128_op2[2]) = 0xd2436487;
+  *((int*)& __m128_op2[1]) = 0x0fa96b88;
+  *((int*)& __m128_op2[0]) = 0x5f94ab13;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xd24271c4;
+  *((int*)& __m128_result[1]) = 0x2711bad1;
+  *((int*)& __m128_result[0]) = 0xe8e309ed;
+  __m128_out = __lsx_vfnmsub_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x00000000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x80000000;
+  *((int*)& __m128_result[2]) = 0x80000000;
+  *((int*)& __m128_result[1]) = 0x80000000;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfnmsub_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x007ffd00;
+  *((int*)& __m128_op2[0]) = 0x01400840;
+  *((int*)& __m128_result[3]) = 0x80000000;
+  *((int*)& __m128_result[2]) = 0x80000000;
+  *((int*)& __m128_result[1]) = 0x007ffd00;
+  *((int*)& __m128_result[0]) = 0x01400840;
+  __m128_out = __lsx_vfnmsub_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0x00000000;
+  *((int*)& __m128_op2[2]) = 0x00000000;
+  *((int*)& __m128_op2[1]) = 0x7f800000;
+  *((int*)& __m128_op2[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x80000000;
+  *((int*)& __m128_result[2]) = 0x80000000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfnmsub_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_op2[3]) = 0xcd636363;
+  *((int*)& __m128_op2[2]) = 0xcd636363;
+  *((int*)& __m128_op2[1]) = 0xcd636363;
+  *((int*)& __m128_op2[0]) = 0xcd636363;
+  *((int*)& __m128_result[3]) = 0xcd636363;
+  *((int*)& __m128_result[2]) = 0xcd636363;
+  *((int*)& __m128_result[1]) = 0xcd636363;
+  *((int*)& __m128_result[0]) = 0xcd636363;
+  __m128_out = __lsx_vfnmsub_s(__m128_op0,__m128_op1,__m128_op2);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
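+  /* Cases for __lsx_vfmax_s: per-element single-precision maximum (a NaN operand yields the other operand).  */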
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x0000ffff;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x0000ffff;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0xc2409eda;
+  *((int*)& __m128_op1[2]) = 0xb019323f;
+  *((int*)& __m128_op1[1]) = 0x460f3b39;
+  *((int*)& __m128_op1[0]) = 0x3ef4be3a;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x460f3b39;
+  *((int*)& __m128_result[0]) = 0x3ef4be3a;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000001;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000001;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000001;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000001;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0xfefd7f7f;
+  *((int*)& __m128_op1[2]) = 0x7f7f7f7e;
+  *((int*)& __m128_op1[1]) = 0xdffdbffe;
+  *((int*)& __m128_op1[0]) = 0xba6f5543;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x7f7f7f7e;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xff84fff4;
+  *((int*)& __m128_op0[2]) = 0xff84fff4;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xfffffff0;
+  *((int*)& __m128_op1[3]) = 0xff84fff4;
+  *((int*)& __m128_op1[2]) = 0xff84fff4;
+  *((int*)& __m128_op1[1]) = 0xffffffff;
+  *((int*)& __m128_op1[0]) = 0xfffffff0;
+  *((int*)& __m128_result[3]) = 0xffc4fff4;
+  *((int*)& __m128_result[2]) = 0xffc4fff4;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0xfffffff0;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00007fff;
+  *((int*)& __m128_op1[2]) = 0x00007fff;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00007fff;
+  *((int*)& __m128_result[2]) = 0x00007fff;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_op1[3]) = 0xffffffff;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000001;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0xffffffff;
+  *((int*)& __m128_op1[2]) = 0xffffffff;
+  *((int*)& __m128_op1[1]) = 0xffffffff;
+  *((int*)& __m128_op1[0]) = 0xffffffff;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000001;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x01010001;
+  *((int*)& __m128_op0[0]) = 0x01010001;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00020000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00020000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00020000;
+  *((int*)& __m128_result[1]) = 0x01010001;
+  *((int*)& __m128_result[0]) = 0x01010001;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000020;
+  *((int*)& __m128_op1[2]) = 0x00000020;
+  *((int*)& __m128_op1[1]) = 0x0000001f;
+  *((int*)& __m128_op1[0]) = 0x0000001f;
+  *((int*)& __m128_result[3]) = 0x00000020;
+  *((int*)& __m128_result[2]) = 0x00000020;
+  *((int*)& __m128_result[1]) = 0x0000001f;
+  *((int*)& __m128_result[0]) = 0x0000001f;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0xf3040705;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0xf3040705;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0xf3040705;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000004;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000004;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000004;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000004;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000004;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000004;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmax_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
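+  /* Cases for __lsx_vfmin_s: per-element single-precision minimum.  */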
+  *((int*)& __m128_op0[3]) = 0x0000ffff;
+  *((int*)& __m128_op0[2]) = 0x0000ffff;
+  *((int*)& __m128_op0[1]) = 0x0000ffff;
+  *((int*)& __m128_op0[0]) = 0x0000fffe;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmin_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffe5;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffe5;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmin_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmin_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x13121110;
+  *((int*)& __m128_op0[2]) = 0x1211100f;
+  *((int*)& __m128_op0[1]) = 0x11100f0e;
+  *((int*)& __m128_op0[0]) = 0x100f0e0d;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmin_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xfffffff3;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000008;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000088;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000008;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000088;
+  __m128_out = __lsx_vfmin_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x52525252;
+  *((int*)& __m128_op0[2]) = 0xadadadad;
+  *((int*)& __m128_op0[1]) = 0x52525252;
+  *((int*)& __m128_op0[0]) = 0xadadadad;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0xffffffff;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0xffffffff;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0xadadadad;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0xadadadad;
+  __m128_out = __lsx_vfmin_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x0000ffff;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x0000ffff;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x0000ffff;
+  __m128_out = __lsx_vfmin_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
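+  /* Cases for __lsx_vfmax_d: per-element double-precision maximum on __m128d operands.  */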
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmax_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0400040004000400;
+  *((unsigned long*)& __m128d_result[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128d_result[0]) = 0x0400040004000400;
+  __m128d_out = __lsx_vfmax_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x01ff01ff01ff01ff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x01ff01ff01ff01ff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x01ff01ff01ff01ff;
+  *((unsigned long*)& __m128d_result[0]) = 0x01ff01ff01ff01ff;
+  __m128d_out = __lsx_vfmax_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128d_op0[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128d_result[0]) = 0xfffcfffcfffcfffc;
+  __m128d_out = __lsx_vfmax_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x000000000000ffff;
+  __m128d_out = __lsx_vfmax_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmax_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128d_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128d_op1[0]) = 0xfdfef9ff0efff900;
+  *((unsigned long*)& __m128d_result[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128d_result[0]) = 0x6363636363636363;
+  __m128d_out = __lsx_vfmax_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xd70b30c96ea9f4e8;
+  *((unsigned long*)& __m128d_op0[0]) = 0xa352bfac9269e0aa;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmax_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x98147a504d145000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x377b810912c0e000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128d_op1[0]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128d_result[1]) = 0x98147a504d145000;
+  *((unsigned long*)& __m128d_result[0]) = 0x377b810912c0e000;
+  __m128d_out = __lsx_vfmax_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x4399d3221a29d3f2;
+  *((unsigned long*)& __m128d_op0[0]) = 0xc3818bffe7b7a7b8;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x4399d3221a29d3f2;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmax_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x17c64aaef639f093;
+  *((unsigned long*)& __m128d_op0[0]) = 0xdb8f439722ec502d;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x17c64aaef639f093;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmax_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x10f881a20ffd02b0;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x10f881a20ffd02b0;
+  *((unsigned long*)& __m128d_result[0]) = 0x00000000ff800000;
+  __m128d_out = __lsx_vfmax_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
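+  /* Cases for __lsx_vfmin_d: per-element double-precision minimum.  */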
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmin_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfmin_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000c000ffffc000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000958affff995d;
+  __m128d_out = __lsx_vfmin_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmin_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmin_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmin_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x1748c4f9ed1a5870;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmin_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmin_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
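+  /* Cases for __lsx_vfmaxa_s: per element, select the single-precision value with the larger magnitude.  */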
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmaxa_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xf436f3f5;
+  *((int*)& __m128_op0[0]) = 0x2f4ef4a8;
+  *((int*)& __m128_op1[3]) = 0xff800000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0xff800000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0xff800000;
+  *((int*)& __m128_result[0]) = 0x2f4ef4a8;
+  __m128_out = __lsx_vfmaxa_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000800;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000800;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000800;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000800;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmaxa_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0xc0c0c000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00800080;
+  *((int*)& __m128_op1[2]) = 0x00800080;
+  *((int*)& __m128_op1[1]) = 0x0080006b;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00800080;
+  *((int*)& __m128_result[2]) = 0xc0c0c000;
+  *((int*)& __m128_result[1]) = 0x0080006b;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmaxa_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmaxa_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x80000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x80000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmaxa_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmaxa_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
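+  /* Cases for __lsx_vfmina_s: per element, select the single-precision value with the smaller magnitude.  */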
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmina_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0xffffffff;
+  *((int*)& __m128_op1[2]) = 0xffffffff;
+  *((int*)& __m128_op1[1]) = 0xffffffff;
+  *((int*)& __m128_op1[0]) = 0xffffffff;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmina_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmina_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0xff01ff01;
+  *((int*)& __m128_op1[2]) = 0x0000ff7d;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x0000fffc;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmina_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0xdfa6e0c6;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0xd46cdc13;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmina_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x01010101;
+  *((int*)& __m128_op0[2]) = 0x01010101;
+  *((int*)& __m128_op0[1]) = 0x010101fe;
+  *((int*)& __m128_op0[0]) = 0x0101fe87;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmina_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_op1[3]) = 0xffff0000;
+  *((int*)& __m128_op1[2]) = 0xffff0000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfmina_s(__m128_op0,__m128_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
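+  /* Cases for __lsx_vfmaxa_d: per element, select the double-precision value with the larger magnitude.  */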
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000800000000000;
+  __m128d_out = __lsx_vfmaxa_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmaxa_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmaxa_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x01203f1e3d1c3b1a;
+  *((unsigned long*)& __m128d_op0[0]) = 0x3918371635143312;
+  *((unsigned long*)& __m128d_op1[1]) = 0x00000af555555555;
+  *((unsigned long*)& __m128d_op1[0]) = 0x00000af555555555;
+  *((unsigned long*)& __m128d_result[1]) = 0x01203f1e3d1c3b1a;
+  *((unsigned long*)& __m128d_result[0]) = 0x3918371635143312;
+  __m128d_out = __lsx_vfmaxa_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000010000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000010000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000010000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000010000000000;
+  __m128d_out = __lsx_vfmaxa_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmaxa_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmaxa_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmaxa_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x10f8000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xfff8ffa2fffdffb0;
+  *((unsigned long*)& __m128d_op1[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128d_result[1]) = 0x10f8000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x00000000ff800000;
+  __m128d_out = __lsx_vfmaxa_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
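+  /* Cases for __lsx_vfmina_d: per element, select the double-precision value with the smaller magnitude.  */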
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmina_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000080000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x80000000fff6fc00;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000080000000;
+  __m128d_out = __lsx_vfmina_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000080000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmina_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000158;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmina_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0xfffe0004fffe0004;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmina_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x002a001a001a000b;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x002a001a001a000b;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfmina_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
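+  /* Cases for __lsx_vflogb_s: per-element base-2 exponent (logb) of single-precision inputs; a zero input yields -inf.  */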
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00003004;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0xff800000;
+  *((int*)& __m128_result[1]) = 0xff800000;
+  *((int*)& __m128_result[0]) = 0xc3080000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0xff800000;
+  *((int*)& __m128_result[0]) = 0xff800000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0xff800000;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0xffffffff;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x01010101;
+  *((int*)& __m128_op0[2]) = 0x01010101;
+  *((int*)& __m128_op0[1]) = 0x01010101;
+  *((int*)& __m128_op0[0]) = 0x01010101;
+  *((int*)& __m128_result[3]) = 0xc2fa0000;
+  *((int*)& __m128_result[2]) = 0xc2fa0000;
+  *((int*)& __m128_result[1]) = 0xc2fa0000;
+  *((int*)& __m128_result[0]) = 0xc2fa0000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x01ff01ff;
+  *((int*)& __m128_op0[2]) = 0x01ff01ff;
+  *((int*)& __m128_op0[1]) = 0x01ff01ff;
+  *((int*)& __m128_op0[0]) = 0x01ff01ff;
+  *((int*)& __m128_result[3]) = 0xc2f80000;
+  *((int*)& __m128_result[2]) = 0xc2f80000;
+  *((int*)& __m128_result[1]) = 0xc2f80000;
+  *((int*)& __m128_result[0]) = 0xc2f80000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0xd46cdc13;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0xff800000;
+  *((int*)& __m128_result[1]) = 0xff800000;
+  *((int*)& __m128_result[0]) = 0x7fc00000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00fe00fe;
+  *((int*)& __m128_op0[2]) = 0x000200fe;
+  *((int*)& __m128_op0[1]) = 0x00fe00fe;
+  *((int*)& __m128_op0[0]) = 0x000200fe;
+  *((int*)& __m128_result[3]) = 0xc2fc0000;
+  *((int*)& __m128_result[2]) = 0xc3040000;
+  *((int*)& __m128_result[1]) = 0xc2fc0000;
+  *((int*)& __m128_result[0]) = 0xc3040000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x01010101;
+  *((int*)& __m128_op0[0]) = 0x00000100;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0xff800000;
+  *((int*)& __m128_result[1]) = 0xc2fa0000;
+  *((int*)& __m128_result[0]) = 0xc30d0000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000014;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000014;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0xc3110000;
+  *((int*)& __m128_result[1]) = 0xff800000;
+  *((int*)& __m128_result[0]) = 0xc3110000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x4e3e1337;
+  *((int*)& __m128_op0[0]) = 0x38bb47d2;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0xff800000;
+  *((int*)& __m128_result[1]) = 0x41e80000;
+  *((int*)& __m128_result[0]) = 0xc1600000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0xff800000;
+  *((int*)& __m128_result[1]) = 0xff800000;
+  *((int*)& __m128_result[0]) = 0xff800000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0xff800000;
+  *((int*)& __m128_result[1]) = 0xff800000;
+  *((int*)& __m128_result[0]) = 0xff800000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0xff800000;
+  *((int*)& __m128_result[1]) = 0xff800000;
+  *((int*)& __m128_result[0]) = 0xff800000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00003ff8;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0xff800000;
+  *((int*)& __m128_result[1]) = 0xff800000;
+  *((int*)& __m128_result[0]) = 0xc3080000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xf1f181a2;
+  *((int*)& __m128_op0[2]) = 0xf1f1f1b0;
+  *((int*)& __m128_op0[1]) = 0xf1f1f1f1;
+  *((int*)& __m128_op0[0]) = 0xf180f1f1;
+  *((int*)& __m128_result[3]) = 0x7fc00000;
+  *((int*)& __m128_result[2]) = 0x7fc00000;
+  *((int*)& __m128_result[1]) = 0x7fc00000;
+  *((int*)& __m128_result[0]) = 0x7fc00000;
+  __m128_out = __lsx_vflogb_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
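+  /* Cases for __lsx_vflogb_d: per-element base-2 exponent (logb) of double-precision inputs.  */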
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m128d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xc090c40000000000;
+  __m128d_out = __lsx_vflogb_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xfff0000000000000;
+  __m128d_out = __lsx_vflogb_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xfff0000000000000;
+  __m128d_out = __lsx_vflogb_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xfff0000000000000;
+  __m128d_out = __lsx_vflogb_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xfff0000000000000;
+  __m128d_out = __lsx_vflogb_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xfff0000000000000;
+  __m128d_out = __lsx_vflogb_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xfff0000000000000;
+  __m128d_out = __lsx_vflogb_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000001000000048;
+  *((unsigned long*)& __m128d_result[1]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m128d_result[0]) = 0xc090380000000000;
+  __m128d_out = __lsx_vflogb_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
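+  /* Cases for __lsx_vfclass_s: classify each single-precision element into an FCLASS bit mask (e.g. 0x200 for +0.0, 0x2 for a quiet NaN).  */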
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x7fff8000;
+  *((int*)& __m128_op0[1]) = 0x00010081;
+  *((int*)& __m128_op0[0]) = 0x00000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000020000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010000000100;
+  __m128i_out = __lsx_vfclass_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xfe02fe02;
+  *((int*)& __m128_op0[2]) = 0xfe02fe02;
+  *((int*)& __m128_op0[1]) = 0xfe02fe02;
+  *((int*)& __m128_op0[0]) = 0xfe02fe02;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000800000008;
+  __m128i_out = __lsx_vfclass_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x0000000c;
+  *((int*)& __m128_op0[2]) = 0x7fff000c;
+  *((int*)& __m128_op0[1]) = 0x10001000;
+  *((int*)& __m128_op0[0]) = 0x10001000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000010000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000008000000080;
+  __m128i_out = __lsx_vfclass_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000020000000200;
+  __m128i_out = __lsx_vfclass_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000020000000200;
+  __m128i_out = __lsx_vfclass_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x0c0b0a09;
+  *((int*)& __m128_op0[2]) = 0x0b0a0908;
+  *((int*)& __m128_op0[1]) = 0x0a090807;
+  *((int*)& __m128_op0[0]) = 0x09080706;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000008000000080;
+  __m128i_out = __lsx_vfclass_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000100;
+  __m128i_out = __lsx_vfclass_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000200;
+  __m128i_out = __lsx_vfclass_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000200;
+  __m128i_out = __lsx_vfclass_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000002;
+  __m128i_out = __lsx_vfclass_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128d_op0[0]) = 0xff00000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000008;
+  __m128i_out = __lsx_vfclass_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000200;
+  __m128i_out = __lsx_vfclass_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x14ccc6320176a4d2;
+  *((unsigned long*)& __m128d_op0[0]) = 0x685670d37e80682a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000080;
+  __m128i_out = __lsx_vfclass_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000200;
+  __m128i_out = __lsx_vfclass_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000200;
+  __m128i_out = __lsx_vfclass_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xfe07e5fe;
+  *((int*)& __m128_op0[2]) = 0xfefdddfe;
+  *((int*)& __m128_op0[1]) = 0x00020100;
+  *((int*)& __m128_op0[0]) = 0xfedd0c00;
+  *((int*)& __m128_result[3]) = 0x7fc00000;
+  *((int*)& __m128_result[2]) = 0x7fc00000;
+  *((int*)& __m128_result[1]) = 0x1e801ffc;
+  *((int*)& __m128_result[0]) = 0x7fc00000;
+  __m128_out = __lsx_vfsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xff00ff00;
+  *((int*)& __m128_op0[2]) = 0xff00ff00;
+  *((int*)& __m128_op0[1]) = 0xff00ff00;
+  *((int*)& __m128_op0[0]) = 0xff00ff00;
+  *((int*)& __m128_result[3]) = 0x7fc00000;
+  *((int*)& __m128_result[2]) = 0x7fc00000;
+  *((int*)& __m128_result[1]) = 0x7fc00000;
+  *((int*)& __m128_result[0]) = 0x7fc00000;
+  __m128_out = __lsx_vfsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x8c7fc73a;
+  *((int*)& __m128_op0[2]) = 0x137e54af;
+  *((int*)& __m128_op0[1]) = 0xbc84cf6f;
+  *((int*)& __m128_op0[0]) = 0x76208329;
+  *((int*)& __m128_result[3]) = 0x7fc00000;
+  *((int*)& __m128_result[2]) = 0x297f29fe;
+  *((int*)& __m128_result[1]) = 0x7fc00000;
+  *((int*)& __m128_result[0]) = 0x5acab5a5;
+  __m128_out = __lsx_vfsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffff9727;
+  *((int*)& __m128_op0[2]) = 0xffff9727;
+  *((int*)& __m128_op0[1]) = 0xfffffe79;
+  *((int*)& __m128_op0[0]) = 0xffffba5f;
+  *((int*)& __m128_result[3]) = 0xffff9727;
+  *((int*)& __m128_result[2]) = 0xffff9727;
+  *((int*)& __m128_result[1]) = 0xfffffe79;
+  *((int*)& __m128_result[0]) = 0xffffba5f;
+  __m128_out = __lsx_vfsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0xfff8fff8;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0xfff80000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0xfff8fff8;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0xfff80000;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0xffffffff;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x1f1b917c;
+  *((int*)& __m128_op0[0]) = 0x9f3d5e05;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x4fa432d6;
+  *((int*)& __m128_result[0]) = 0x7fc00000;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0x12835580;
+  *((int*)& __m128_op0[0]) = 0xb880eb98;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0x55fcbad1;
+  *((int*)& __m128_result[0]) = 0x7fc00000;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x06070607;
+  *((int*)& __m128_op0[2]) = 0x00000807;
+  *((int*)& __m128_op0[1]) = 0x0707f8f8;
+  *((int*)& __m128_op0[0]) = 0x03e8157e;
+  *((int*)& __m128_result[3]) = 0x5c303f97;
+  *((int*)& __m128_result[2]) = 0x61ff9049;
+  *((int*)& __m128_result[1]) = 0x5bafa1dd;
+  *((int*)& __m128_result[0]) = 0x5d3e1e1d;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xfff7fffe;
+  *((int*)& __m128_op0[2]) = 0xfffa01ff;
+  *((int*)& __m128_op0[1]) = 0xfffbfffe;
+  *((int*)& __m128_op0[0]) = 0xfffe01ff;
+  *((int*)& __m128_result[3]) = 0xfff7fffe;
+  *((int*)& __m128_result[2]) = 0xfffa01ff;
+  *((int*)& __m128_result[1]) = 0xfffbfffe;
+  *((int*)& __m128_result[0]) = 0xfffe01ff;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x45000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x44000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x3cb504f3;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x3d3504f3;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00020001;
+  *((int*)& __m128_op0[0]) = 0x00020002;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x607fffc0;
+  *((int*)& __m128_result[0]) = 0x607fff80;
+  __m128_out = __lsx_vfrsqrt_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000002;
+  *((int*)& __m128_op0[2]) = 0x00000002;
+  *((int*)& __m128_op0[1]) = 0x00000003;
+  *((int*)& __m128_op0[0]) = 0x00000003;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrecip_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xf6e91c00;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x51cfd7c0;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x880c91b8;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x2d1da85b;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrecip_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrecip_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrecip_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xfffffffa;
+  *((int*)& __m128_op0[2]) = 0xfffffffa;
+  *((int*)& __m128_op0[1]) = 0xfffffffa;
+  *((int*)& __m128_op0[0]) = 0xfffffffa;
+  *((int*)& __m128_result[3]) = 0xfffffffa;
+  *((int*)& __m128_result[2]) = 0xfffffffa;
+  *((int*)& __m128_result[1]) = 0xfffffffa;
+  *((int*)& __m128_result[0]) = 0xfffffffa;
+  __m128_out = __lsx_vfrecip_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrecip_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffff0001;
+  *((int*)& __m128_op0[2]) = 0xffff0001;
+  *((int*)& __m128_op0[1]) = 0xffff0001;
+  *((int*)& __m128_op0[0]) = 0xffff0001;
+  *((int*)& __m128_result[3]) = 0xffff0001;
+  *((int*)& __m128_result[2]) = 0xffff0001;
+  *((int*)& __m128_result[1]) = 0xffff0001;
+  *((int*)& __m128_result[0]) = 0xffff0001;
+  __m128_out = __lsx_vfrecip_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x0a000000;
+  *((int*)& __m128_op0[2]) = 0x0a000000;
+  *((int*)& __m128_op0[1]) = 0x0a000000;
+  *((int*)& __m128_op0[0]) = 0x0a000000;
+  *((int*)& __m128_result[3]) = 0x75000000;
+  *((int*)& __m128_result[2]) = 0x75000000;
+  *((int*)& __m128_result[1]) = 0x75000000;
+  *((int*)& __m128_result[0]) = 0x75000000;
+  __m128_out = __lsx_vfrecip_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x7f800000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfrecip_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffa486c90f;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000058bcc201;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffa486c90f;
+  *((unsigned long*)& __m128d_result[0]) = 0x1f52d710bf295626;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffff01ff01;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffff01ff01;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000be00be;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x1f1b917c9f3d5e05;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000001400000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x1f81e3779b97f4a8;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff8000000000000;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128d_op0[0]) = 0x001effae001effae;
+  *((unsigned long*)& __m128d_result[1]) = 0x2006454690d3de87;
+  *((unsigned long*)& __m128d_result[0]) = 0x2006454690d3de87;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128d_op0[0]) = 0xbbc8ecc5f3ced5f3;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff8000000000000;
+  __m128d_out = __lsx_vfsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff8000000000000;
+  __m128d_out = __lsx_vfrsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0001ffff00000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m128d_result[1]) = 0x5ff6a0a40ea8f47c;
+  *((unsigned long*)& __m128d_result[0]) = 0x5ff6a0a40e9da42a;
+  __m128d_out = __lsx_vfrsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x000000000000000f;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x61608654a2d4f6da;
+  __m128d_out = __lsx_vfrsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff0000000000000;
+  __m128d_out = __lsx_vfrsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00fe000100cf005f;
+  *((unsigned long*)& __m128d_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128d_result[1]) = 0x5f675e96e29a5a60;
+  *((unsigned long*)& __m128d_result[0]) = 0x7fff7fff7fff7fff;
+  __m128d_out = __lsx_vfrsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff0000000000000;
+  __m128d_out = __lsx_vfrsqrt_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff8000000000000;
+  __m128d_out = __lsx_vfrecip_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff0000000000000;
+  __m128d_out = __lsx_vfrecip_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00003f8000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00003f8000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff0000000000000;
+  __m128d_out = __lsx_vfrecip_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff0000000000000;
+  __m128d_out = __lsx_vfrecip_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff0000000000000;
+  __m128d_out = __lsx_vfrecip_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfrecip_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00000000fffa0000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000000fffa0000;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff0000000000000;
+  __m128d_out = __lsx_vfrecip_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xe593c8c4e593c8c4;
+  *((unsigned long*)& __m128d_result[1]) = 0x805ffffe01001fe0;
+  *((unsigned long*)& __m128d_result[0]) = 0x9a49e11102834d70;
+  __m128d_out = __lsx_vfrecip_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m128d_op0[0]) = 0x5252dcdcdcdcdcdc;
+  *((unsigned long*)& __m128d_result[1]) = 0x2d8bf1f8fc7e3f20;
+  *((unsigned long*)& __m128d_result[0]) = 0x2d8b24b936d1b24d;
+  __m128d_out = __lsx_vfrecip_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-fp-cvt.c b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-fp-cvt.c
new file mode 100644
index 00000000000..5f2849a9d2f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-fp-cvt.c
@@ -0,0 +1,4114 @@
+/* { dg-do run } */
+/* { dg-options "-mlsx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lsxintrin.h>
+int main ()
+{
+  __m128i __m128i_op0, __m128i_op1, __m128i_op2, __m128i_out, __m128i_result;
+  __m128 __m128_op0, __m128_op1, __m128_op2, __m128_out, __m128_result;
+  __m128d __m128d_op0, __m128d_op1, __m128d_op2, __m128d_out, __m128d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
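+
+  /* Every check below follows the same generated pattern: raw bit patterns
+     are stored into the operand vectors through pointer casts, one LSX
+     conversion intrinsic is called, and the output vector is compared with
+     the expected bit pattern via ASSERTEQ_32 / ASSERTEQ_64 from
+     simd_correctness_check.h (which compare 32-bit and 64-bit lanes
+     respectively), with __LINE__ identifying the failing check.  */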
+
+  *((int*)& __m128_op0[3]) = 0x0000c77c;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00e0000000e00000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000002a55005501;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000002a55000001;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x36280000;
+  *((int*)& __m128_result[1]) = 0x42a00000;
+  *((int*)& __m128_result[0]) = 0x42a02000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xf436f3f5;
+  *((int*)& __m128_op0[0]) = 0x2f4ef4a8;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfcvth_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffcfb799f1;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0282800002828282;
+  *((int*)& __m128_result[3]) = 0xffffe000;
+  *((int*)& __m128_result[2]) = 0xffffe000;
+  *((int*)& __m128_result[1]) = 0xc1f6e000;
+  *((int*)& __m128_result[0]) = 0xbb3e2000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000040004000100;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000958affff995d;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x36de0000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x3be14000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x41dfffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x403be000;
+  *((int*)& __m128_result[2]) = 0xffffe000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x63637687;
+  *((int*)& __m128_op0[2]) = 0x636316bb;
+  *((int*)& __m128_op0[1]) = 0x63636363;
+  *((int*)& __m128_op0[0]) = 0x63636363;
+  *((unsigned long*)& __m128d_result[1]) = 0x446c6ed0e0000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x446c62d760000000;
+  __m128d_out = __lsx_vfcvth_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfcvth_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0x000000ff;
+  *((int*)& __m128_op0[2]) = 0x000000ff;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x371fe00000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x371fe00000000000;
+  __m128d_out = __lsx_vfcvth_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffff7fff7ef;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80808080ffffffff;
+  *((int*)& __m128_result[3]) = 0xffffe000;
+  *((int*)& __m128_result[2]) = 0xffffe000;
+  *((int*)& __m128_result[1]) = 0xc6ffe000;
+  *((int*)& __m128_result[0]) = 0xc6fde000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffe0000000;
+  __m128d_out = __lsx_vfcvth_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffe1ffc100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000400000;
+  *((int*)& __m128_result[3]) = 0xfffc2000;
+  *((int*)& __m128_result[2]) = 0xfff82000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfcvth_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0x0000b3a6;
+  *((int*)& __m128_op0[2]) = 0x000067da;
+  *((int*)& __m128_op0[1]) = 0x00004e42;
+  *((int*)& __m128_op0[0]) = 0x0000c26a;
+  *((unsigned long*)& __m128d_result[1]) = 0x379674c000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x3789f68000000000;
+  __m128d_out = __lsx_vfcvth_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0xffff0000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffe0000000;
+  __m128d_out = __lsx_vfcvth_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001001001000080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4195d926d8018000;
+  *((int*)& __m128_result[3]) = 0x33800000;
+  *((int*)& __m128_result[2]) = 0x35800000;
+  *((int*)& __m128_result[1]) = 0x37800000;
+  *((int*)& __m128_result[0]) = 0x37000000;
+  __m128_out = __lsx_vfcvth_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfcvth_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((int*)& __m128_result[3]) = 0xffffe000;
+  *((int*)& __m128_result[2]) = 0xffffe000;
+  *((int*)& __m128_result[1]) = 0xffffe000;
+  *((int*)& __m128_result[0]) = 0xffffe000;
+  __m128_out = __lsx_vfcvtl_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfcvtl_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000a000a000a000a;
+  *((int*)& __m128_result[3]) = 0x35200000;
+  *((int*)& __m128_result[2]) = 0x35200000;
+  *((int*)& __m128_result[1]) = 0x35200000;
+  *((int*)& __m128_result[0]) = 0x35200000;
+  __m128_out = __lsx_vfcvtl_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfcvtl_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000100;
+  *((int*)& __m128_op0[2]) = 0x0f00fe00;
+  *((int*)& __m128_op0[1]) = 0x0000017f;
+  *((int*)& __m128_op0[0]) = 0xff00fe7f;
+  *((unsigned long*)& __m128d_result[1]) = 0x3727f00000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xc7e01fcfe0000000;
+  __m128d_out = __lsx_vfcvtl_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfcvtl_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000020;
+  *((int*)& __m128_op0[0]) = 0x00000020;
+  *((unsigned long*)& __m128d_result[1]) = 0x36f0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x36f0000000000000;
+  __m128d_out = __lsx_vfcvtl_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xbd994889;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x0a092444;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x3941248880000000;
+  __m128d_out = __lsx_vfcvtl_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x62cbf96e4acfaf40;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf0bc9a5278285a4a;
+  *((int*)& __m128_result[3]) = 0xc6178000;
+  *((int*)& __m128_result[2]) = 0xbb4a4000;
+  *((int*)& __m128_result[1]) = 0x47050000;
+  *((int*)& __m128_result[0]) = 0x43494000;
+  __m128_out = __lsx_vfcvtl_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00040004;
+  *((int*)& __m128_op0[2]) = 0x00040004;
+  *((int*)& __m128_op0[1]) = 0x00040004;
+  *((int*)& __m128_op0[0]) = 0x00040004;
+  *((unsigned long*)& __m128d_result[1]) = 0x37c0001000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x37c0001000000000;
+  __m128d_out = __lsx_vfcvtl_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff00ff00ff00ff00;
+  *((int*)& __m128_result[3]) = 0xffe00000;
+  *((int*)& __m128_result[2]) = 0xffe00000;
+  *((int*)& __m128_result[1]) = 0xffe00000;
+  *((int*)& __m128_result[0]) = 0xffe00000;
+  __m128_out = __lsx_vfcvtl_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvtl_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvtl_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0xffffe000;
+  *((int*)& __m128_result[0]) = 0xffffe000;
+  __m128_out = __lsx_vfcvtl_s_h(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfcvtl_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfcvtl_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x007f7f7f;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x380fdfdfc0000000;
+  __m128d_out = __lsx_vfcvtl_d_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((int*)& __m128_op0[3]) = 0x004200a0;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x004200a0;
+  *((int*)& __m128_op0[0]) = 0x00200001;
+  *((int*)& __m128_op1[3]) = 0x004200a0;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x004200a0;
+  *((int*)& __m128_op1[0]) = 0x00200000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vfcvt_h_s(__m128_op0,__m128_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_op1[3]) = 0x00010001;
+  *((int*)& __m128_op1[2]) = 0x0001007c;
+  *((int*)& __m128_op1[1]) = 0x00010001;
+  *((int*)& __m128_op1[0]) = 0x00010001;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vfcvt_h_s(__m128_op0,__m128_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vfcvt_h_s(__m128_op0,__m128_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vfcvt_h_s(__m128_op0,__m128_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x80808080;
+  *((int*)& __m128_op1[2]) = 0x80808080;
+  *((int*)& __m128_op1[1]) = 0x80808080;
+  *((int*)& __m128_op1[0]) = 0x80808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000800080008000;
+  __m128i_out = __lsx_vfcvt_h_s(__m128_op0,__m128_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vfcvt_h_s(__m128_op0,__m128_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_op1[3]) = 0x00000000;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vfcvt_h_s(__m128_op0,__m128_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xfffffffc;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xfffffffc;
+  *((int*)& __m128_op1[3]) = 0x00000001;
+  *((int*)& __m128_op1[2]) = 0x00000000;
+  *((int*)& __m128_op1[1]) = 0x00000000;
+  *((int*)& __m128_op1[0]) = 0x00000103;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vfcvt_h_s(__m128_op0,__m128_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00000049000000c0;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000000ffffff29;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000100000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x7ff0000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x7f800000;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x000000002c002400;
+  *((unsigned long*)& __m128d_op1[1]) = 0x7ef400ad21fc7081;
+  *((unsigned long*)& __m128d_op1[0]) = 0x28bf0351ec69b5f2;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x7f800000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00000dc300003ffb;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000dc300003ffb;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000ffff3fbfffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x7fffffff7fffffff;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x7ffffffb;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xbba0c07b51230d5c;
+  *((unsigned long*)& __m128d_op0[0]) = 0xa15f3f9e8763c2b9;
+  *((unsigned long*)& __m128d_op1[1]) = 0xbba0c07b51230d5c;
+  *((unsigned long*)& __m128d_op1[0]) = 0xa15f3f9e8763c2b9;
+  *((int*)& __m128_result[3]) = 0x9d0603db;
+  *((int*)& __m128_result[2]) = 0x80000000;
+  *((int*)& __m128_result[1]) = 0x9d0603db;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128d_op1[1]) = 0x8101010181010101;
+  *((unsigned long*)& __m128d_op1[0]) = 0x8101010181010101;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x80000000;
+  *((int*)& __m128_result[0]) = 0x80000000;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffc00000ff800000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffffffffffff;
+  *((int*)& __m128_result[3]) = 0xff800000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0xffffffff;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xfffdfffe80008000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0xffeffff4;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x7f800000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000090;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000090;
+  *((unsigned long*)& __m128d_op1[1]) = 0x004eff6200d2ff76;
+  *((unsigned long*)& __m128d_op1[0]) = 0xff70002800be00a0;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0xff800000;
+  __m128_out = __lsx_vfcvt_s_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
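+  /* __lsx_vfrintrne_s: round each single-precision element to an integral value, ties to even.  */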
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrne_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrne_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00130013;
+  *((int*)& __m128_op0[2]) = 0x00130013;
+  *((int*)& __m128_op0[1]) = 0x00130013;
+  *((int*)& __m128_op0[0]) = 0x00130013;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrne_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x20202020;
+  *((int*)& __m128_op0[2]) = 0x20202020;
+  *((int*)& __m128_op0[1]) = 0x20202020;
+  *((int*)& __m128_op0[0]) = 0x20207fff;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrne_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x01f50000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrne_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0xffffffff;
+  __m128_out = __lsx_vfrintrne_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000001;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000001;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrne_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrne_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrne_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00020004;
+  *((int*)& __m128_op0[0]) = 0x00000001;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrne_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xfffbfffb;
+  *((int*)& __m128_op0[2]) = 0xfffbfffb;
+  *((int*)& __m128_op0[1]) = 0xfffbfffb;
+  *((int*)& __m128_op0[0]) = 0xfffbfffb;
+  *((int*)& __m128_result[3]) = 0xfffbfffb;
+  *((int*)& __m128_result[2]) = 0xfffbfffb;
+  *((int*)& __m128_result[1]) = 0xfffbfffb;
+  *((int*)& __m128_result[0]) = 0xfffbfffb;
+  __m128_out = __lsx_vfrintrne_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x0ff780a1;
+  *((int*)& __m128_op0[2]) = 0x0efc01af;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0xfe7f0000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0xfe7f0000;
+  __m128_out = __lsx_vfrintrne_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
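+  /* __lsx_vfrintrp_s: round each single-precision element up (toward +Inf).  */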
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrp_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0xefffffff;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0xefffffff;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrp_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffff00;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffff00;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffff00;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0xffffff00;
+  __m128_out = __lsx_vfrintrp_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffb96b;
+  *((int*)& __m128_op0[2]) = 0xffff57c9;
+  *((int*)& __m128_op0[1]) = 0xffff6080;
+  *((int*)& __m128_op0[0]) = 0xffff4417;
+  *((int*)& __m128_result[3]) = 0xffffb96b;
+  *((int*)& __m128_result[2]) = 0xffff57c9;
+  *((int*)& __m128_result[1]) = 0xffff6080;
+  *((int*)& __m128_result[0]) = 0xffff4417;
+  __m128_out = __lsx_vfrintrp_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00ff00ff;
+  *((int*)& __m128_op0[2]) = 0x00ff00ff;
+  *((int*)& __m128_op0[1]) = 0x62cbf96e;
+  *((int*)& __m128_op0[0]) = 0x4acfaf40;
+  *((int*)& __m128_result[3]) = 0x3f800000;
+  *((int*)& __m128_result[2]) = 0x3f800000;
+  *((int*)& __m128_result[1]) = 0x62cbf96e;
+  *((int*)& __m128_result[0]) = 0x4acfaf40;
+  __m128_out = __lsx_vfrintrp_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00002000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x1fe02000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x3f800000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x3f800000;
+  __m128_out = __lsx_vfrintrp_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0xffffffff;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0xffffffff;
+  __m128_out = __lsx_vfrintrp_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrp_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrp_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrp_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x63636363;
+  *((int*)& __m128_op0[2]) = 0x63abdf16;
+  *((int*)& __m128_op0[1]) = 0x41f8e080;
+  *((int*)& __m128_op0[0]) = 0x16161198;
+  *((int*)& __m128_result[3]) = 0x63636363;
+  *((int*)& __m128_result[2]) = 0x63abdf16;
+  *((int*)& __m128_result[1]) = 0x42000000;
+  *((int*)& __m128_result[0]) = 0x3f800000;
+  __m128_out = __lsx_vfrintrp_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
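+  /* __lsx_vfrintrm_s: round each single-precision element down (toward -Inf).  */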
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrm_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0xa5c4c774;
+  *((int*)& __m128_op0[2]) = 0x856ba83b;
+  *((int*)& __m128_op0[1]) = 0x8003caef;
+  *((int*)& __m128_op0[0]) = 0x54691124;
+  *((int*)& __m128_result[3]) = 0xbf800000;
+  *((int*)& __m128_result[2]) = 0xbf800000;
+  *((int*)& __m128_result[1]) = 0xbf800000;
+  *((int*)& __m128_result[0]) = 0x54691124;
+  __m128_out = __lsx_vfrintrm_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00010002;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xff960015;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0xffd60015;
+  __m128_out = __lsx_vfrintrm_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrm_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrm_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
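+  /* __lsx_vfrintrz_s: round each single-precision element toward zero.  */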
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0x3c992b2e;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffff730f;
+  *((int*)& __m128_result[3]) = 0xffffffff;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0xffffffff;
+  *((int*)& __m128_result[0]) = 0xffff730f;
+  __m128_out = __lsx_vfrintrz_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000001;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000016;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrz_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x18171615;
+  *((int*)& __m128_op0[2]) = 0x17161514;
+  *((int*)& __m128_op0[1]) = 0x16151413;
+  *((int*)& __m128_op0[0]) = 0x15141312;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrz_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x62cbf96e;
+  *((int*)& __m128_op0[2]) = 0x4acfaf40;
+  *((int*)& __m128_op0[1]) = 0xf0bc9a52;
+  *((int*)& __m128_op0[0]) = 0x78285a4a;
+  *((int*)& __m128_result[3]) = 0x62cbf96e;
+  *((int*)& __m128_result[2]) = 0x4acfaf40;
+  *((int*)& __m128_result[1]) = 0xf0bc9a52;
+  *((int*)& __m128_result[0]) = 0x78285a4a;
+  __m128_out = __lsx_vfrintrz_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrz_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrz_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrz_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrz_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrintrz_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128_result, __m128_out);
+
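+  /* __lsx_vfrint_d: round each double-precision element according to the current rounding mode.  */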
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrint_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0003000300030003;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0003000700020005;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrint_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrint_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x7ff0000000000000;
+  __m128d_out = __lsx_vfrint_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00ff000100ff00fe;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00ff003000ff00a0;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrint_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xfd200ed2fd370775;
+  *((unsigned long*)& __m128d_op0[0]) = 0x96198318780e32c5;
+  *((unsigned long*)& __m128d_result[1]) = 0xfd200ed2fd370775;
+  *((unsigned long*)& __m128d_result[0]) = 0x8000000000000000;
+  __m128d_out = __lsx_vfrint_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
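+  /* __lsx_vfrintrne_d: round each double-precision element to an integral value, ties to even.  */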
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrne_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xe0404041e0404041;
+  *((unsigned long*)& __m128d_op0[0]) = 0xe0404041e0404041;
+  *((unsigned long*)& __m128d_result[1]) = 0xe0404041e0404041;
+  *((unsigned long*)& __m128d_result[0]) = 0xe0404041e0404041;
+  __m128d_out = __lsx_vfrintrne_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrne_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000080800000808;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrne_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrne_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrne_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfrintrne_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrne_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000868686868686;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrne_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrne_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
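+  /* __lsx_vfrintrp_d: round each double-precision element up (toward +Inf).  */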
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrp_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xfffc002000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xfffc002000000000;
+  __m128d_out = __lsx_vfrintrp_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x9c9c9c9c00000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x8000000000000000;
+  __m128d_out = __lsx_vfrintrp_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrp_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrp_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrp_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrp_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrp_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000007f00ff00ff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128d_result[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x3ff0000000000000;
+  __m128d_out = __lsx_vfrintrp_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrp_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000077af9450;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x3ff0000000000000;
+  __m128d_out = __lsx_vfrintrp_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
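+  /* __lsx_vfrintrm_d: round each double-precision element down (toward -Inf).  */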
+  *((unsigned long*)& __m128d_op0[1]) = 0xff02ff1bff02ff23;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000ffffff02fff4;
+  *((unsigned long*)& __m128d_result[1]) = 0xff02ff1bff02ff23;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrm_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrm_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrm_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrm_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128d_op0[0]) = 0x6a57a30ff0000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x6a57a30ff0000000;
+  __m128d_out = __lsx_vfrintrm_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000001300000013;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrm_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrm_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[0]) = 0xffffffffffffffff;
+  __m128d_out = __lsx_vfrintrm_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffff02000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x1f81e3779b97f4a8;
+  *((unsigned long*)& __m128d_result[1]) = 0xffffffff02000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrm_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrm_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0001000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrm_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
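+  /* __lsx_vfrintrz_d: round each double-precision element toward zero.  */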
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrz_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x98ff98ff220e220d;
+  *((unsigned long*)& __m128d_op0[0]) = 0xa2e1a2601ff01ff0;
+  *((unsigned long*)& __m128d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x8000000000000000;
+  __m128d_out = __lsx_vfrintrz_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrz_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrz_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrz_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00000000abba7980;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000000ccf98000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrz_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xfe3bfb01fe3bfe01;
+  *((unsigned long*)& __m128d_op0[0]) = 0xfe03fe3ffe01fa21;
+  *((unsigned long*)& __m128d_result[1]) = 0xfe3bfb01fe3bfe01;
+  *((unsigned long*)& __m128d_result[0]) = 0xfe03fe3ffe01fa21;
+  __m128d_out = __lsx_vfrintrz_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x5847b72626ce61ef;
+  *((unsigned long*)& __m128d_op0[0]) = 0x110053f401e7cced;
+  *((unsigned long*)& __m128d_result[1]) = 0x5847b72626ce61ef;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vfrintrz_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
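+  /* __lsx_vfrint_s: round each single-precision element according to the current rounding mode.  */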
+  *((int*)& __m128_op0[3]) = 0x00100010;
+  *((int*)& __m128_op0[2]) = 0x00030000;
+  *((int*)& __m128_op0[1]) = 0x00060002;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrint_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrint_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrint_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000001;
+  *((int*)& __m128_op0[2]) = 0xca02f854;
+  *((int*)& __m128_op0[1]) = 0x00000001;
+  *((int*)& __m128_op0[0]) = 0x00013fa0;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0xca02f854;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrint_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x000000ad;
+  *((int*)& __m128_op0[2]) = 0x00007081;
+  *((int*)& __m128_op0[1]) = 0x00000351;
+  *((int*)& __m128_op0[0]) = 0x0000b5f2;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrint_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrint_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((int*)& __m128_op0[3]) = 0x00ff00ef;
+  *((int*)& __m128_op0[2]) = 0x00ff010f;
+  *((int*)& __m128_op0[1]) = 0x00ff00ff;
+  *((int*)& __m128_op0[0]) = 0x00ff010f;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vfrint_s(__m128_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
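+  /* vftint family: convert floating-point elements to 32-bit integers;
+     __lsx_vftint_w_s and __lsx_vftint_wu_s convert single-precision elements
+     to signed and unsigned words, while __lsx_vftint_w_d narrows the elements
+     of two double vectors into one vector of words.  */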
+  *((int*)& __m128_op0[3]) = 0x0000c77c;
+  *((int*)& __m128_op0[2]) = 0x000047cd;
+  *((int*)& __m128_op0[1]) = 0x0000c0f1;
+  *((int*)& __m128_op0[0]) = 0x00006549;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128d_op1[0]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vftint_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x0000ffff;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x0000ffff;
+  *((int*)& __m128_op0[0]) = 0x0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000001;
+  *((int*)& __m128_op0[2]) = 0xfffffffe;
+  *((int*)& __m128_op0[1]) = 0x00000001;
+  *((int*)& __m128_op0[0]) = 0xfffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00040100;
+  *((int*)& __m128_op0[1]) = 0x00010001;
+  *((int*)& __m128_op0[0]) = 0x00010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xfffffffffffff800;
+  *((unsigned long*)& __m128d_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffff00000080;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000001;
+  *((int*)& __m128_op0[2]) = 0xfffffffe;
+  *((int*)& __m128_op0[1]) = 0x00000001;
+  *((int*)& __m128_op0[0]) = 0xfffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000e0180000e810;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000f0080000f800;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000e0180000e810;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000f0080000f800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffd30000;
+  *((int*)& __m128_op0[2]) = 0x00130000;
+  *((int*)& __m128_op0[1]) = 0xffd30000;
+  *((int*)& __m128_op0[0]) = 0x00130000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xe1000000;
+  *((int*)& __m128_op0[2]) = 0x4deb2610;
+  *((int*)& __m128_op0[1]) = 0xe101e001;
+  *((int*)& __m128_op0[0]) = 0x4dec4089;
+  *((unsigned long*)& __m128i_result[1]) = 0x800000001d64c200;
+  *((unsigned long*)& __m128i_result[0]) = 0x800000001d881120;
+  __m128i_out = __lsx_vftint_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x76f42488;
+  *((int*)& __m128_op0[0]) = 0x80000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff00000000;
+  __m128i_out = __lsx_vftint_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xfffffff0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0x00000001;
+  *((int*)& __m128_op0[1]) = 0xffffffee;
+  *((int*)& __m128_op0[0]) = 0x00000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x0000001f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x0000ffff;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0202f5f80000ff00;
+  *((unsigned long*)& __m128d_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x63636363;
+  *((int*)& __m128_op0[2]) = 0x63636363;
+  *((int*)& __m128_op0[1]) = 0x63636363;
+  *((int*)& __m128_op0[0]) = 0x63636363;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vftint_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x003fffc0;
+  *((int*)& __m128_op0[2]) = 0xffc0003f;
+  *((int*)& __m128_op0[1]) = 0xffc0ffc0;
+  *((int*)& __m128_op0[0]) = 0x003f003f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
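+  /* __lsx_vftintrne_w_s / __lsx_vftintrne_w_d: convert to 32-bit integers
+     using round-to-nearest-even.  */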
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffff7fffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffff8000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x42652524;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000003900000000;
+  __m128i_out = __lsx_vftintrne_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xff00ff7f;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0x7f800000;
+  *((int*)& __m128_op0[1]) = 0x2d1da85b;
+  *((int*)& __m128_op0[0]) = 0x7f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000007fffffff;
+  __m128i_out = __lsx_vftintrne_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x80307028;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0x8040007f;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00000000fefefe6a;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000000c2bac2c2;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x000000002bfd9461;
+  *((unsigned long*)& __m128d_op1[1]) = 0x000000004fc04f81;
+  *((unsigned long*)& __m128d_op1[0]) = 0x000000004fc04f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x000000000000001f;
+  *((unsigned long*)& __m128d_op0[0]) = 0x000000000000001f;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000003a0000003a;
+  *((unsigned long*)& __m128d_op1[1]) = 0x37c0001000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x37c0001000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000068;
+  *((unsigned long*)& __m128d_op1[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128d_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x4429146a7b4c88b2;
+  *((unsigned long*)& __m128d_op0[0]) = 0xe22b3595efa4aa0c;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0001000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000400000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000000fffffff5;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0xe7e5560400010001;
+  *((unsigned long*)& __m128d_op1[0]) = 0xe7e5dabf00010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000080000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x03050302;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x03010302;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000600007fff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000008ffffa209;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x046a09ec009c0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x000aa822;
+  *((int*)& __m128_op0[2]) = 0xa79308f6;
+  *((int*)& __m128_op0[1]) = 0x03aa355e;
+  *((int*)& __m128_op0[0]) = 0x1d37b5a1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffff00;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
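+  /* Cases for __lsx_vftintrp_w_s / __lsx_vftintrp_w_d (float-to-int
+     conversion, rounding toward positive infinity).  */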
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00001802;
+  *((int*)& __m128_op0[0]) = 0x041b0013;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vftintrp_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x004200a000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x004200a000200000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000fe00ff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0001000101fd01fe;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000c2f90000bafa;
+  *((unsigned long*)& __m128d_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x00000000fffff800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xff80ffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0x7ffffffe;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0101080408040804;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0804080407040804;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0101080408040804;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0804080407040804;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00010001;
+  *((int*)& __m128_op0[2]) = 0x00010001;
+  *((int*)& __m128_op0[1]) = 0x00010001;
+  *((int*)& __m128_op0[0]) = 0x00010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vftintrp_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00000003ffda00f3;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000003ffda00f3;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xfffffadf;
+  *((int*)& __m128_op0[2]) = 0xfedbfefe;
+  *((int*)& __m128_op0[1]) = 0x5f5f7bfe;
+  *((int*)& __m128_op0[0]) = 0xdefb5ada;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff80000000;
+  __m128i_out = __lsx_vftintrp_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffa6ff91fdd8ef77;
+  *((unsigned long*)& __m128d_op0[0]) = 0x061202bffb141c38;
+  *((unsigned long*)& __m128d_op1[1]) = 0xfefffffffed08f77;
+  *((unsigned long*)& __m128d_op1[0]) = 0x8160cdd2f365ed0d;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000000000000;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000001;
+  *((int*)& __m128_op0[2]) = 0x084314a6;
+  *((int*)& __m128_op0[1]) = 0x00000001;
+  *((int*)& __m128_op0[0]) = 0x084314a6;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vftintrp_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x3f413f4100000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x7f801fe000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000017fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xfe3bfb01fe3bfe01;
+  *((unsigned long*)& __m128d_op0[0]) = 0xfe03fe3ffe01fa21;
+  *((unsigned long*)& __m128d_op1[1]) = 0xfe3bfb01fe3bfe01;
+  *((unsigned long*)& __m128d_op1[0]) = 0xfe03fe3ffe01fa21;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000080000000;
+  __m128i_out = __lsx_vftintrp_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x3a800000;
+  *((int*)& __m128_op0[2]) = 0x3a800000;
+  *((int*)& __m128_op0[1]) = 0x000ef000;
+  *((int*)& __m128_op0[0]) = 0x0000003b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vftintrp_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
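+  /* Cases for __lsx_vftintrm_w_s / __lsx_vftintrm_w_d (float-to-int
+     conversion, rounding toward negative infinity).  */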
+  *((int*)& __m128_op0[3]) = 0x10404000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x09610001;
+  *((int*)& __m128_op0[0]) = 0x00000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x0000001a;
+  *((int*)& __m128_op0[2]) = 0xfffffff7;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0800080008000800;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0800080008000800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x000000000202fe02;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128d_op1[0]) = 0xffff00fc0000ff02;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00040004;
+  *((int*)& __m128_op0[2]) = 0x00040004;
+  *((int*)& __m128_op0[1]) = 0x00040004;
+  *((int*)& __m128_op0[0]) = 0x00040004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00ffff00;
+  *((int*)& __m128_op0[2]) = 0xff00ff00;
+  *((int*)& __m128_op0[1]) = 0x00ffff00;
+  *((int*)& __m128_op0[0]) = 0xff00ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000080000000;
+  __m128i_out = __lsx_vftintrm_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x013ec13e;
+  *((int*)& __m128_op0[1]) = 0xc03fc03f;
+  *((int*)& __m128_op0[0]) = 0xc0ff00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffdfffffff8;
+  __m128i_out = __lsx_vftintrm_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x7fffffff7ffffffb;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x43800000;
+  *((int*)& __m128_op0[0]) = 0x43800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010000000100;
+  __m128i_out = __lsx_vftintrm_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000014;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000014;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
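+  /* Cases for __lsx_vftintrz_w_s, __lsx_vftintrz_wu_s and __lsx_vftintrz_w_d
+     (float-to-int conversion, rounding toward zero).  */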
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000017fda829;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xfffffffe;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x02020004;
+  *((int*)& __m128_op0[2]) = 0x02020202;
+  *((int*)& __m128_op0[1]) = 0x00002000;
+  *((int*)& __m128_op0[0]) = 0x00010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xfffffff7;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x80307028ffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x8040007fffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xff84fff4;
+  *((int*)& __m128_op0[2]) = 0xff84fff4;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xfffffff0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x7fff7fff;
+  *((int*)& __m128_op0[2]) = 0x7fff7fff;
+  *((int*)& __m128_op0[1]) = 0x00000001;
+  *((int*)& __m128_op0[0]) = 0x0000003f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x000000ff;
+  *((int*)& __m128_op0[2]) = 0x808000ff;
+  *((int*)& __m128_op0[1]) = 0x000000ff;
+  *((int*)& __m128_op0[0]) = 0x808000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x7fff0007e215b122;
+  *((unsigned long*)& __m128d_op1[0]) = 0x7ffeffff7bfff828;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x7f7f7f7f;
+  *((int*)& __m128_op0[1]) = 0x00000001;
+  *((int*)& __m128_op0[0]) = 0x00000010;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x07ffc000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffff0000;
+  *((int*)& __m128_op0[0]) = 0x0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00002000;
+  *((int*)& __m128_op0[2]) = 0x00002000;
+  *((int*)& __m128_op0[1]) = 0x10000000;
+  *((int*)& __m128_op0[0]) = 0x10000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000001;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xf039b8c0;
+  *((int*)& __m128_op0[2]) = 0xc61e81ef;
+  *((int*)& __m128_op0[1]) = 0x6db7da53;
+  *((int*)& __m128_op0[0]) = 0xfbd2e34b;
+  *((unsigned long*)& __m128i_result[1]) = 0x80000000ffffd860;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff80000000;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x67eb85af;
+  *((int*)& __m128_op0[2]) = 0xb2ebb000;
+  *((int*)& __m128_op0[1]) = 0xc8847ef6;
+  *((int*)& __m128_op0[0]) = 0xed3f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_wu_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00020000;
+  *((int*)& __m128_op0[0]) = 0xffff0001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00027113;
+  *((int*)& __m128_op0[2]) = 0x50a27112;
+  *((int*)& __m128_op0[1]) = 0x00d57017;
+  *((int*)& __m128_op0[0]) = 0x94027113;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xff80ff80;
+  *((int*)& __m128_op0[2]) = 0x7e017f01;
+  *((int*)& __m128_op0[1]) = 0x7f3b7f3f;
+  *((int*)& __m128_op0[0]) = 0x7f3b7f21;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vftintrz_w_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000000011ff040;
+  *((unsigned long*)& __m128d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op1[1]) = 0x00000000047fe2f0;
+  *((unsigned long*)& __m128d_op1[0]) = 0x00000000047fe2f0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_w_d(__m128d_op0,__m128d_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
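+  /* Cases for __lsx_vftintl_l_s (low-half single-precision elements
+     converted to 64-bit integers).  */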
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00d4ccb8;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00124888;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xfff00000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0xfff00000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x80000000;
+  *((int*)& __m128_op0[2]) = 0xffffd860;
+  *((int*)& __m128_op0[1]) = 0x7fffffff;
+  *((int*)& __m128_op0[0]) = 0x80000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00008000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00008000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
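+  /* Cases for __lsx_vftinth_l_s (high-half single-precision elements
+     converted to 64-bit integers).  */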
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xff80ffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0x7ffffffe;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x4f804f80;
+  *((int*)& __m128_op0[0]) = 0x4f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x0000007b;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000600;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x3f800000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x04870ba0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00009c7c;
+  *((int*)& __m128_op0[0]) = 0x00007176;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0xffffffff;
+  *((int*)& __m128_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x0667ae56;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000020;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftinth_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
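+  /* Cases for __lsx_vftintrnel_l_s (low half, round-to-nearest-even
+     conversion to 64-bit integers).  */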
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrnel_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xffffffff;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrnel_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x887c8beb;
+  *((int*)& __m128_op0[2]) = 0x969e00f2;
+  *((int*)& __m128_op0[1]) = 0x101f8b68;
+  *((int*)& __m128_op0[0]) = 0x0b6f8095;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrnel_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00020000;
+  *((int*)& __m128_op0[2]) = 0x00020000;
+  *((int*)& __m128_op0[1]) = 0x000001fc;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrnel_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrnel_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrnel_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrnel_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00020000;
+  *((int*)& __m128_op0[0]) = 0xffff0001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrnel_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x0a752a55;
+  *((int*)& __m128_op0[1]) = 0x0a753500;
+  *((int*)& __m128_op0[0]) = 0xa9fa0d06;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrnel_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
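+  /* Cases for __lsx_vftintrpl_l_s (low half, rounding toward positive
+     infinity, conversion to 64-bit integers).  */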
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0xffffffff;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrpl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x7fffffff;
+  *((int*)& __m128_op0[2]) = 0x7fffffff;
+  *((int*)& __m128_op0[1]) = 0x7fffffff;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrpl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x000d0254;
+  *((int*)& __m128_op0[2]) = 0x0000007e;
+  *((int*)& __m128_op0[1]) = 0x00000014;
+  *((int*)& __m128_op0[0]) = 0x00140014;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vftintrpl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrpl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x39412488;
+  *((int*)& __m128_op0[0]) = 0x80000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrpl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrpl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000014;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000014;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vftintrpl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00010001;
+  *((int*)& __m128_op0[2]) = 0x00010001;
+  *((int*)& __m128_op0[1]) = 0x00010001;
+  *((int*)& __m128_op0[0]) = 0x00010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vftintrpl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrpl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrpl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x2e34594c;
+  *((int*)& __m128_op0[0]) = 0x3b000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vftintrpl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
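+  /* Cases for __lsx_vftintrm_l_d (double to 64-bit integer conversion,
+     rounding toward negative infinity).  */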
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000004000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xf4b6f3f52f4ef4a8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xff80ffffffffff80;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000ff80ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00ff00ff;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrzl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrzl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrzl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrzl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0xfffefffe;
+  *((int*)& __m128_op0[2]) = 0xfffeffff;
+  *((int*)& __m128_op0[1]) = 0xfffefffe;
+  *((int*)& __m128_op0[0]) = 0xfffeffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrzl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x00000000;
+  *((int*)& __m128_op0[2]) = 0x00000000;
+  *((int*)& __m128_op0[1]) = 0x00000000;
+  *((int*)& __m128_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrzl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((int*)& __m128_op0[3]) = 0x0000033a;
+  *((int*)& __m128_op0[2]) = 0x0bde0853;
+  *((int*)& __m128_op0[1]) = 0x0a960e6b;
+  *((int*)& __m128_op0[0]) = 0x0a4f0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrzl_l_s(__m128_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000400000004000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000400000007004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000210011084;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x3c600000ff800000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x6a57a30ff0000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vftint_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x000000000000040d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0008000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x004f1fcfd01f9f9f;
+  *((unsigned long*)& __m128d_op0[0]) = 0x9f4fcfcfcf800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x40f0001000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x40f0001000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000010001;
+  __m128i_out = __lsx_vftint_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x9c7c266e3faa293c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x000ef0000000003b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x000000009c83e21a;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000022001818;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftint_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x000000ffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffff0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrne_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000015d926c7;
+  *((unsigned long*)& __m128d_op0[0]) = 0x000000000000e41b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vftintrp_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000777777777777;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffff7777ffff7777;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vftintrp_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrp_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000004000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xf4b6f3f52f4ef4a8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xff80ffffffffff80;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000ff80ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrm_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00000000b5207f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x800000001d64c200;
+  *((unsigned long*)& __m128d_op0[0]) = 0x800000001d881120;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x00000000f0009d3c;
+  *((unsigned long*)& __m128d_op0[0]) = 0x000000016fff9dff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xffff007f00000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xffff007f00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0xc0f3fa0080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffec060;
+  __m128i_out = __lsx_vftintrz_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000040a04000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000040a04000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000ebd20000714f;
+  *((unsigned long*)& __m128d_op0[0]) = 0x00012c8a0000a58a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_l_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128d_op0[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m128d_op0[0]) = 0x03fc03fc03fc03fc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vftintrz_lu_d(__m128d_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffinth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffinth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffinth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m128d_result[0]) = 0xbff0000000000000;
+  __m128d_out = __lsx_vffinth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000003a24;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003dbe88077c78c1;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x40cd120000000000;
+  __m128d_out = __lsx_vffinth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffinth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x4050000000000000;
+  __m128d_out = __lsx_vffinth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0086000000040000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0082000000000007;
+  *((unsigned long*)& __m128d_result[1]) = 0x4160c00000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x4110000000000000;
+  __m128d_out = __lsx_vffinth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffinth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff8000010f800000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffinth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000051649b6;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000003e0000003f;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x41945926d8000000;
+  __m128d_out = __lsx_vffinth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffintl_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfe82fe0200000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe82fe0200000000;
+  *((unsigned long*)& __m128d_result[1]) = 0xc177d01fe0000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffintl_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffintl_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100010001007c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128d_result[1]) = 0x40f0001000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x40f0001000000000;
+  __m128d_out = __lsx_vffintl_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128d_result[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x40f3fa0000000000;
+  __m128d_out = __lsx_vffintl_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffe0001;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xc0fffff000000000;
+  __m128d_out = __lsx_vffintl_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffintl_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffintl_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffintl_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8493941335f5cc0c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x625a7312befcb21e;
+  *((unsigned long*)& __m128d_result[1]) = 0x43e092728266beba;
+  *((unsigned long*)& __m128d_result[0]) = 0x43d8969cc4afbf2d;
+  __m128d_out = __lsx_vffint_d_lu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x03ff03ff03ff03ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x438ff81ff81ff820;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffint_d_l(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffint_d_lu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f8000004f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f8000004f800000;
+  *((unsigned long*)& __m128d_result[1]) = 0x43d3e0000013e000;
+  *((unsigned long*)& __m128d_result[0]) = 0x43d3e0000013e000;
+  __m128d_out = __lsx_vffint_d_l(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffint_d_l(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffint_d_lu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffint_d_l(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0x0000000000000000;
+  __m128d_out = __lsx_vffint_d_lu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128d_result[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m128d_result[0]) = 0xbff0000000000000;
+  __m128d_out = __lsx_vffint_d_l(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0674c8868a74fc80;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfdce8003090b0906;
+  *((unsigned long*)& __m128d_result[1]) = 0x4399d3221a29d3f2;
+  *((unsigned long*)& __m128d_result[0]) = 0xc3818bffe7b7a7b8;
+  __m128d_out = __lsx_vffint_d_l(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128d_result, __m128d_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_wu(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((int*)& __m128_result[3]) = 0x4b7f00ff;
+  *((int*)& __m128_result[2]) = 0x4b7f00ff;
+  *((int*)& __m128_result[1]) = 0x4b7f00ff;
+  *((int*)& __m128_result[0]) = 0x4b7f00ff;
+  __m128_out = __lsx_vffint_s_w(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000401000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000004;
+  *((int*)& __m128_result[3]) = 0x40800000;
+  *((int*)& __m128_result[2]) = 0x4b800000;
+  *((int*)& __m128_result[1]) = 0x47800080;
+  *((int*)& __m128_result[0]) = 0x40800000;
+  __m128_out = __lsx_vffint_s_w(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001600000016;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001600000016;
+  *((int*)& __m128_result[3]) = 0x41b00000;
+  *((int*)& __m128_result[2]) = 0x41b00000;
+  *((int*)& __m128_result[1]) = 0x41b00000;
+  *((int*)& __m128_result[0]) = 0x41b00000;
+  __m128_out = __lsx_vffint_s_wu(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x47000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_w(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((int*)& __m128_result[3]) = 0x4f800000;
+  *((int*)& __m128_result[2]) = 0x4f800000;
+  *((int*)& __m128_result[1]) = 0x4f800000;
+  *((int*)& __m128_result[0]) = 0x4f800000;
+  __m128_out = __lsx_vffint_s_wu(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000442800007b50;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffff0204;
+  *((int*)& __m128_result[3]) = 0x46885000;
+  *((int*)& __m128_result[2]) = 0x46f6a000;
+  *((int*)& __m128_result[1]) = 0x4f800000;
+  *((int*)& __m128_result[0]) = 0x4f7fff02;
+  __m128_out = __lsx_vffint_s_wu(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x76f424887fffffff;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x3f800000;
+  *((int*)& __m128_result[1]) = 0x4eede849;
+  *((int*)& __m128_result[0]) = 0x4f000000;
+  __m128_out = __lsx_vffint_s_w(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd70b30c96ea9f4e8;
+  *((unsigned long*)& __m128i_op0[0]) = 0xa352bfac9269e0aa;
+  *((int*)& __m128_result[3]) = 0xce23d33d;
+  *((int*)& __m128_result[2]) = 0x4edd53ea;
+  *((int*)& __m128_result[1]) = 0xceb95a81;
+  *((int*)& __m128_result[0]) = 0xcedb2c3f;
+  __m128_out = __lsx_vffint_s_w(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x3f800000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_w(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_wu(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_w(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_wu(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_w(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000003ff8;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x467fe000;
+  __m128_out = __lsx_vffint_s_w(__m128i_op0);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff80000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0xbf800000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0xcf000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_l(__m128i_op0,__m128i_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_l(__m128i_op0,__m128i_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x5eff0000;
+  *((int*)& __m128_result[2]) = 0x5eff0000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_l(__m128i_op0,__m128i_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000000e3;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfda9b23a624082fd;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff0000;
+  *((int*)& __m128_result[3]) = 0x43630000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0xdc159371;
+  *((int*)& __m128_result[0]) = 0x4f7fff00;
+  __m128_out = __lsx_vffint_s_l(__m128i_op0,__m128i_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_l(__m128i_op0,__m128i_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_l(__m128i_op0,__m128i_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000040;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x42800000;
+  *((int*)& __m128_result[0]) = 0x42800000;
+  __m128_out = __lsx_vffint_s_l(__m128i_op0,__m128i_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000100;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x43800000;
+  *((int*)& __m128_result[0]) = 0x43800000;
+  __m128_out = __lsx_vffint_s_l(__m128i_op0,__m128i_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x00000000;
+  __m128_out = __lsx_vffint_s_l(__m128i_op0,__m128i_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001effae001effae;
+  *((int*)& __m128_result[3]) = 0x00000000;
+  *((int*)& __m128_result[2]) = 0x00000000;
+  *((int*)& __m128_result[1]) = 0x59f7fd70;
+  *((int*)& __m128_result[0]) = 0x59f7fd70;
+  __m128_out = __lsx_vffint_s_l(__m128i_op0,__m128i_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000ef0000000003b;
+  *((int*)& __m128_result[3]) = 0x577fff00;
+  *((int*)& __m128_result[2]) = 0x577fff00;
+  *((int*)& __m128_result[1]) = 0x00000000;
+  *((int*)& __m128_result[0]) = 0x596f0000;
+  __m128_out = __lsx_vffint_s_l(__m128i_op0,__m128i_op1);
+  ASSERTEQ_32(__LINE__, __m128_result, __m128_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-int-arith.c b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-int-arith.c
new file mode 100644
index 00000000000..8771c88cf74
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-int-arith.c
@@ -0,0 +1,22424 @@
+/* { dg-do run } */
+/* { dg-options "-mlsx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lsxintrin.h>
+int main ()
+{
+  __m128i __m128i_op0, __m128i_op1, __m128i_op2, __m128i_out, __m128i_result;
+  __m128 __m128_op0, __m128_op1, __m128_op2, __m128_out, __m128_result;
+  __m128d __m128d_op0, __m128d_op1, __m128d_op2, __m128d_out, __m128d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, lont_out, lont_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+  *((int*)& __m128_op0[3]) = 0x0000c77c;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000b0000000b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000201000000000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000fc0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000b0000000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002010000fc000b;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000017fda829;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000017fda829;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000001fffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f7f7f7f00107f04;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f0000fd7f0000fd;
+  *((unsigned long*)& __m128i_result[1]) = 0x7e7e7e7eff0f7f04;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f0000fd7f01fffb;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf4b6f3f52f4ef4a8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x195f307a5d04acbb;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6a1a3fbb3c90260e;
+  *((unsigned long*)& __m128i_result[1]) = 0x19df307a5d04acbb;
+  *((unsigned long*)& __m128i_result[0]) = 0x5ed032b06bde1ab6;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5555001400005111;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffabbeab55110140;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5555001400005111;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffabbeab55110140;
+  *((unsigned long*)& __m128i_result[1]) = 0xaaaa00280000a222;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe567c56aa220280;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf51cf8dad6040188;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0982e2daf234ed87;
+  *((unsigned long*)& __m128i_result[1]) = 0xf51cf8dad6040188;
+  *((unsigned long*)& __m128i_result[0]) = 0x0982e2daf234ed87;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000490000004d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000073;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000002a;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000049000000c0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001ffffff29;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000bd3d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000bd30;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000d7fff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000007a6d;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000dfefe0000;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffd000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffd000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfefa000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefefefefefefefe;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0038000000051fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003c000000022021;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff0101ffffe000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffffa0204000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f370101ff04ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f3bffffa0226021;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1baf8eabd26bc629;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1c2640b9a8e9fb49;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0002dab8746acf8e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00036dd1c5c15856;
+  *((unsigned long*)& __m128i_result[1]) = 0x1bb1686346d595b7;
+  *((unsigned long*)& __m128i_result[0]) = 0x1c29ad8a6daa539f;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfeffffffffff0002;
+  __m128i_out = __lsx_vadd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001ffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff800000c3080000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff81ffffc3080000;
+  __m128i_out = __lsx_vadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x004200a000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x004200a000200001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x004200a000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x004200a000200001;
+  __m128i_out = __lsx_vadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001f0000001f;
+  __m128i_out = __lsx_vadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0029aeaca57d74e6;
+  *((unsigned long*)& __m128i_op0[0]) = 0xdbe332365392c686;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000056f64adb9464;
+  *((unsigned long*)& __m128i_op1[0]) = 0x29ca096f235819c2;
+  *((unsigned long*)& __m128i_result[1]) = 0x002a05a2f059094a;
+  *((unsigned long*)& __m128i_result[0]) = 0x05ad3ba576eae048;
+  __m128i_out = __lsx_vadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000010;
+  __m128i_out = __lsx_vadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000400;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000040d;
+  __m128i_out = __lsx_vadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001300000013;
+  __m128i_out = __lsx_vadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000100;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000100;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001000000ff;
+  __m128i_out = __lsx_vadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000300000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000002fffffffb;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000010000fffb;
+  __m128i_out = __lsx_vadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadd_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000060000000e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001201fe01e9;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000060000000e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001201fe01e9;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000c0000001c;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002403fc03d2;
+  __m128i_out = __lsx_vadd_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff1000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff1000100010001;
+  __m128i_out = __lsx_vadd_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd70b30c96ea9f4e8;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa352bfac9269e0aa;
+  *((unsigned long*)& __m128i_result[1]) = 0xd70b30c96ea9f4e8;
+  *((unsigned long*)& __m128i_result[0]) = 0xa352bfac9269e0aa;
+  __m128i_out = __lsx_vadd_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffa;
+  __m128i_out = __lsx_vadd_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001001100110068;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001001100110067;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vadd_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x379674c000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3789f68000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x379674c000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3789f68000000000;
+  __m128i_out = __lsx_vadd_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadd_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000555889;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000002580f01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00060fbf02040fbf;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00020fbf02000fbf;
+  *((unsigned long*)& __m128i_result[1]) = 0x00060fbf02596848;
+  *((unsigned long*)& __m128i_result[0]) = 0x00020fbf04581ec0;
+  __m128i_out = __lsx_vadd_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001388928513889;
+  *((unsigned long*)& __m128i_op0[0]) = 0x006938094a013889;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001388928513889;
+  *((unsigned long*)& __m128i_op1[0]) = 0x006938094a013889;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002711250a27112;
+  *((unsigned long*)& __m128i_result[0]) = 0x00d2701294027112;
+  __m128i_out = __lsx_vadd_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2006454690d3de87;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2006454690d3de87;
+  *((unsigned long*)& __m128i_result[1]) = 0x202544f490f2de35;
+  *((unsigned long*)& __m128i_result[0]) = 0x202544f490f2de35;
+  __m128i_out = __lsx_vadd_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000ff02;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000001fe;
+  __m128i_out = __lsx_vsub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc6ffe000c6fde000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080808080808081;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_result[0]) = 0x467f6080467d607f;
+  __m128i_out = __lsx_vsub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000010000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00fe00ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000010000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00fe00fe00ff;
+  __m128i_out = __lsx_vsub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff00007fff0000;
+  __m128i_out = __lsx_vsub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000c0dec4d1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000040223c2e;
+  __m128i_out = __lsx_vsub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfd200ed2fd370775;
+  *((unsigned long*)& __m128i_op0[0]) = 0x96198318780e32c5;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffe65ecc1be5bc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffe65ecc1be5bc;
+  *((unsigned long*)& __m128i_result[1]) = 0xfe212874311c22b9;
+  *((unsigned long*)& __m128i_result[0]) = 0x971a9dbaacf34d09;
+  __m128i_out = __lsx_vsub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f4f00004f4f0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f4f00004f4f0000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_result[0]) = 0x4f4f4f4f4f4f4f4f;
+  __m128i_out = __lsx_vsub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf000e001bf84df83;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff8e001ff84e703;
+  *((unsigned long*)& __m128i_result[1]) = 0x14042382c3ffa481;
+  *((unsigned long*)& __m128i_result[0]) = 0x040c238283ff9d01;
+  __m128i_out = __lsx_vsub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0141010101410101;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0141010101410101;
+  *((unsigned long*)& __m128i_result[1]) = 0xfebffefffebffeff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfebffefffebffeff;
+  __m128i_out = __lsx_vsub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1111111111111111;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111111111111111;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1111111111111111;
+  *((unsigned long*)& __m128i_result[0]) = 0x1111111111111111;
+  __m128i_out = __lsx_vsub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000700000004fdff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000300000000fdff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff7fffefffa01ff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffbfffefffe01ff;
+  __m128i_out = __lsx_vsub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000000010000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000cd630000cd63;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000329d0000329d;
+  __m128i_out = __lsx_vsub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x08080807f7f7f7f8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x08080805f5f5f5f8;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ff00;
+  __m128i_out = __lsx_vsub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00060eb000000006;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000075c00000cf0;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffaf1500000fffa;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000f8a40000f310;
+  __m128i_out = __lsx_vsub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffff100fffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffdf100fffc;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000007f7f7f7f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000007f7f7f7f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000010;
+  __m128i_out = __lsx_vsub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000800000000000;
+  __m128i_out = __lsx_vsub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00001802041b0013;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x00001802041b0014;
+  __m128i_out = __lsx_vsub_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000f7d1000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x773324887fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff082efffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x88cbdb7780000001;
+  __m128i_out = __lsx_vsub_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000001f50000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffe0b0000;
+  __m128i_out = __lsx_vsub_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000fffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010000000000001;
+  __m128i_out = __lsx_vsub_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000800080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000800080;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000b;
+  __m128i_out = __lsx_vsub_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffeb;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffeb;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000015;
+  __m128i_out = __lsx_vsub_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vsub_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0007000000050000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001fffe0001fefc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0006000100040001;
+  *((unsigned long*)& __m128i_result[0]) = 0x00010002ffff0105;
+  __m128i_out = __lsx_vsub_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000003fffffffd;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000003fffffffd;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000003fffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000003fffffffd;
+  __m128i_out = __lsx_vsub_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsub_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000feff23560000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fd1654860000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363abdf16;
+  *((unsigned long*)& __m128i_op1[0]) = 0x41f8e08016161198;
+  *((unsigned long*)& __m128i_result[1]) = 0x9c9d9b9bbfaa20e9;
+  *((unsigned long*)& __m128i_result[0]) = 0xbe081c963e6fee68;
+  __m128i_out = __lsx_vsub_q(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
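+  /* The vaddi/vsubi variants below take an unsigned 5-bit immediate
+     (0 .. 0x1f) as the second operand instead of a vector register.  */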
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x1414141414141415;
+  *((unsigned long*)& __m128i_result[0]) = 0x1414141414141415;
+  __m128i_out = __lsx_vaddi_bu(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0505050505050505;
+  *((unsigned long*)& __m128i_result[0]) = 0x0505050504040404;
+  __m128i_out = __lsx_vaddi_bu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000008140c80;
+  *((unsigned long*)& __m128i_result[1]) = 0x1f1f1f1f1f1f1f1f;
+  *((unsigned long*)& __m128i_result[0]) = 0x1f1f1f1f27332b9f;
+  __m128i_out = __lsx_vaddi_bu(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0303030303030303;
+  *((unsigned long*)& __m128i_result[0]) = 0x0303030303030304;
+  __m128i_out = __lsx_vaddi_bu(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x8f8f8f8f8f8f8f8f;
+  *((unsigned long*)& __m128i_result[0]) = 0x8f8f8f8f8f8f8f8f;
+  __m128i_out = __lsx_vaddi_bu(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0018001800180018;
+  *((unsigned long*)& __m128i_result[0]) = 0x0018001800180018;
+  __m128i_out = __lsx_vaddi_hu(__m128i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0019081900190019;
+  *((unsigned long*)& __m128i_result[0]) = 0x0019081900190019;
+  __m128i_out = __lsx_vaddi_hu(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_result[0]) = 0x000a000a000a000a;
+  __m128i_out = __lsx_vaddi_hu(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc1000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffcc000b000b000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000b000b010a000b;
+  __m128i_out = __lsx_vaddi_hu(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m128i_result[0]) = 0x001f001f001f001f;
+  __m128i_out = __lsx_vaddi_hu(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x001c001c001c001c;
+  *((unsigned long*)& __m128i_result[0]) = 0x001c001c001c001c;
+  __m128i_out = __lsx_vaddi_hu(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0x680485c8b304b019;
+  *((unsigned long*)& __m128i_result[0]) = 0xc89d7f0fed582019;
+  __m128i_out = __lsx_vaddi_hu(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000a0000000a;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000a0000000a;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffe000ffff1fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000090100000a;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffe009ffff2008;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000300000003;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfc01fcfefc02fdf7;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe00fcfffe01fd01;
+  *((unsigned long*)& __m128i_result[1]) = 0xfc01fd13fc02fe0c;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe00fd14fe01fd16;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001300000013;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000bd3d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000c0000bd49;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000c7fff000c;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffe0001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000500000005;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000005fffe0006;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000fffffeff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000009ffffff08;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000900000009;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x55aa55aa55aa55ab;
+  *((unsigned long*)& __m128i_op0[0]) = 0xaa55555655aaaaa8;
+  *((unsigned long*)& __m128i_result[1]) = 0x55aa55c355aa55c4;
+  *((unsigned long*)& __m128i_result[0]) = 0xaa55556f55aaaac1;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000e0000002e;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000e0000004e;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000400000004;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x003f000400000003;
+  *((unsigned long*)& __m128i_result[0]) = 0x003f000400000003;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff8000010f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000900000009;
+  *((unsigned long*)& __m128i_result[0]) = 0xff80000a0f800009;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x30eb020302101b03;
+  *((unsigned long*)& __m128i_op0[0]) = 0x020310d0c0030220;
+  *((unsigned long*)& __m128i_result[1]) = 0x30eb022002101b20;
+  *((unsigned long*)& __m128i_result[0]) = 0x020310edc003023d;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x59f7fd7059f7fd70;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001700000017;
+  *((unsigned long*)& __m128i_result[0]) = 0x59f7fd8759f7fd87;
+  __m128i_out = __lsx_vaddi_wu(__m128i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6420e0208400c4c4;
+  *((unsigned long*)& __m128i_op0[0]) = 0x20c4e0c4e0da647a;
+  *((unsigned long*)& __m128i_result[1]) = 0x6420e0208400c4e3;
+  *((unsigned long*)& __m128i_result[0]) = 0x20c4e0c4e0da6499;
+  __m128i_out = __lsx_vaddi_du(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21201f1e1d001b1a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1918171615141312;
+  *((unsigned long*)& __m128i_result[1]) = 0x21201f1e1d001b25;
+  *((unsigned long*)& __m128i_result[0]) = 0x191817161514131d;
+  __m128i_out = __lsx_vaddi_du(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000014;
+  __m128i_out = __lsx_vaddi_du(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000004000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00007770ffff9411;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000400000004c;
+  *((unsigned long*)& __m128i_result[0]) = 0x00007770ffff941d;
+  __m128i_out = __lsx_vaddi_du(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000016;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000016;
+  __m128i_out = __lsx_vaddi_du(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000080000000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000b;
+  __m128i_out = __lsx_vaddi_du(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff489b693120950;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffc45a851c40c18;
+  *((unsigned long*)& __m128i_result[1]) = 0xe0d56a9774f3ea31;
+  *((unsigned long*)& __m128i_result[0]) = 0xe0dd268932a5edf9;
+  __m128i_out = __lsx_vsubi_bu(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffff88;
+  *((unsigned long*)& __m128i_result[1]) = 0xe5e5e5e5e5e5e5e5;
+  *((unsigned long*)& __m128i_result[0]) = 0xe5e5e5e5e4e4e46d;
+  __m128i_out = __lsx_vsubi_bu(__m128i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000897957687;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000408;
+  *((unsigned long*)& __m128i_result[1]) = 0xf7f7f7ff8e8c6d7e;
+  *((unsigned long*)& __m128i_result[0]) = 0xf7f7f7f7f7f7fbff;
+  __m128i_out = __lsx_vsubi_bu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xe6e6e6e6e6e6e6e6;
+  *((unsigned long*)& __m128i_result[0]) = 0xe6e6e6e6e6e6e6e6;
+  __m128i_out = __lsx_vsubi_bu(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xf8f8f8f8f8f8f8f8;
+  *((unsigned long*)& __m128i_result[0]) = 0xf8f8f8f8f8f8f8f8;
+  __m128i_out = __lsx_vsubi_bu(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2e34594c3b000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m128i_result[0]) = 0x171d423524e9e9e9;
+  __m128i_out = __lsx_vsubi_bu(__m128i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffe2ffe2ffe2ffe2;
+  *((unsigned long*)& __m128i_result[0]) = 0xffe2ffe2ffe2ffe2;
+  __m128i_out = __lsx_vsubi_hu(__m128i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9795698585057dec;
+  *((unsigned long*)& __m128i_op0[0]) = 0x87f82867431a1d08;
+  *((unsigned long*)& __m128i_result[1]) = 0x9780697084f07dd7;
+  *((unsigned long*)& __m128i_result[0]) = 0x87e3285243051cf3;
+  __m128i_out = __lsx_vsubi_hu(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffcfffcfffcfffc;
+  __m128i_out = __lsx_vsubi_hu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffcfffcfffc00fd;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffcfffcfffcfffc;
+  __m128i_out = __lsx_vsubi_hu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x371fe00000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x371fe00000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_result[0]) = 0x370bdfecffecffec;
+  __m128i_out = __lsx_vsubi_hu(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000040600000406;
+  *((unsigned long*)& __m128i_op0[0]) = 0x020202020202fe02;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff503fbfff503fb;
+  *((unsigned long*)& __m128i_result[0]) = 0x01f701f701f7fdf7;
+  __m128i_out = __lsx_vsubi_hu(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffdfffdfffdfffd;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffdfffdfffdfffd;
+  __m128i_out = __lsx_vsubi_hu(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x803e0000803e0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x803e0000803e0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x803bfffd803bfffd;
+  *((unsigned long*)& __m128i_result[0]) = 0x803bfffd803bfffd;
+  __m128i_out = __lsx_vsubi_hu(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffdfffdfffdfffd;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffdfffdfffdfffd;
+  __m128i_out = __lsx_vsubi_hu(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffedffedffedffed;
+  *((unsigned long*)& __m128i_result[0]) = 0xffedffedffedffed;
+  __m128i_out = __lsx_vsubi_hu(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffe4ffe4ffe4ffe4;
+  *((unsigned long*)& __m128i_result[0]) = 0xffe4ffe4ffe4ffe4;
+  __m128i_out = __lsx_vsubi_hu(__m128i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
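+  /* vsubi.wu: subtract the 5-bit unsigned immediate from each 32-bit
+     element; the result wraps modulo 2^32 (no saturation).  */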
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubi_wu(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffefffffffef;
+  __m128i_out = __lsx_vsubi_wu(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffe6ffffffe6;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffe6ffffffe6;
+  __m128i_out = __lsx_vsubi_wu(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffff1fffffff1;
+  __m128i_out = __lsx_vsubi_wu(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffff6fffffff6;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffff6fffffff6;
+  __m128i_out = __lsx_vsubi_wu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffe4ffffffe4;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffe4ffffffe4;
+  __m128i_out = __lsx_vsubi_wu(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffe1ffffffe1;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffe1ffffffe1;
+  __m128i_out = __lsx_vsubi_wu(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffff1fffffff1;
+  __m128i_out = __lsx_vsubi_wu(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffab7e71e33848;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffe1ffffffe1;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffab5f71e33829;
+  __m128i_out = __lsx_vsubi_wu(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
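+  /* vsubi.du: subtract the 5-bit unsigned immediate from each 64-bit
+     element, again with modular (non-saturating) wraparound.  */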
+  *((unsigned long*)& __m128i_op0[1]) = 0xa8beed87bc3f2be1;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0024d8f6a494006a;
+  *((unsigned long*)& __m128i_result[1]) = 0xa8beed87bc3f2bd3;
+  *((unsigned long*)& __m128i_result[0]) = 0x0024d8f6a494005c;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffeb;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffeb;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffe1;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff7;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffe5;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffe5;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf2f2e5e5e5e5e5e5;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xf2f2e5e5e5e5e5dc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff7;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3fffff0000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3fffff0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3ffffeffffffffe5;
+  *((unsigned long*)& __m128i_result[0]) = 0x3ffffeffffffffe5;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000070;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff5;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff0;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffe6;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffe6;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x000100010000fffb;
+  *((unsigned long*)& __m128i_result[0]) = 0x000100010000fffb;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffeb;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffeb;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffa;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffdfffe80008000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffe2;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffdfffe80007fe2;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001a001a001a001a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001a001a001a001a;
+  *((unsigned long*)& __m128i_result[1]) = 0x001a001a001a000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x001a001a001a000b;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000234545b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c0dec4d1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000002345454;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000c0dec4ca;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0f8d33000f8d3300;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003b80000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0f8d33000f8d32fd;
+  *((unsigned long*)& __m128i_result[0]) = 0x0003b7fffffffffd;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubi_du(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
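+  /* vneg.b/h/w/d: two's-complement negation of each element of the
+     given width.  */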
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vneg_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffeffffffff;
+  __m128i_out = __lsx_vneg_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffefffefffffffc;
+  __m128i_out = __lsx_vneg_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff00ffffff01;
+  __m128i_out = __lsx_vneg_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000fff3;
+  __m128i_out = __lsx_vneg_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff0001ffffff0a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100000101;
+  *((unsigned long*)& __m128i_result[0]) = 0x000100ff010101f6;
+  __m128i_out = __lsx_vneg_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vneg_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff000000ff00ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff00ff0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0100000001000100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0100010000000000;
+  __m128i_out = __lsx_vneg_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000100;
+  __m128i_out = __lsx_vneg_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fffffeff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffbff8888080a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x080803ff807ff7f9;
+  *((unsigned long*)& __m128i_result[1]) = 0x010105017878f8f6;
+  *((unsigned long*)& __m128i_result[0]) = 0xf8f8fd0180810907;
+  __m128i_out = __lsx_vneg_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000080000000;
+  __m128i_out = __lsx_vneg_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000300000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffdffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffeffff;
+  __m128i_out = __lsx_vneg_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vneg_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x441ba9fcffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x181b2541ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xbbe5560400010001;
+  *((unsigned long*)& __m128i_result[0]) = 0xe7e5dabf00010001;
+  __m128i_out = __lsx_vneg_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000060a3db;
+  *((unsigned long*)& __m128i_op0[0]) = 0xa70594c000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ff9f5c25;
+  *((unsigned long*)& __m128i_result[0]) = 0x58fa6b4000000000;
+  __m128i_out = __lsx_vneg_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vneg_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff0000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000001f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000008000001e;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff7fffffe2;
+  __m128i_out = __lsx_vneg_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0x98147a504d145000;
+  *((unsigned long*)& __m128i_result[0]) = 0x377b810912c0e000;
+  __m128i_out = __lsx_vneg_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffc00001ff800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x003ffffe00800000;
+  __m128i_out = __lsx_vneg_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100000000;
+  __m128i_out = __lsx_vneg_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vneg_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x087c000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000087c;
+  *((unsigned long*)& __m128i_result[1]) = 0xf784000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffff784;
+  __m128i_out = __lsx_vneg_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vneg_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
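+  /* vsadd.b/h/w/d and vsadd.bu/hu/wu/du: element-wise saturating
+     addition, signed and unsigned respectively; sums are clamped to
+     the element's representable range instead of wrapping.  */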
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vsadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000200;
+  __m128i_out = __lsx_vsadd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000080000000;
+  __m128i_out = __lsx_vsadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffeffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffeffffffff;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vsadd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0c03e17edd781b11;
+  *((unsigned long*)& __m128i_op0[0]) = 0x342caf9be55700b5;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00040003ff83ff84;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00040003ff4dffca;
+  *((unsigned long*)& __m128i_result[1]) = 0x0c07e181ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x3430af9effffffff;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefefefefefefefe;
+  __m128i_out = __lsx_vsadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x85bd6b0e94d89998;
+  *((unsigned long*)& __m128i_op1[0]) = 0xd83c8081ffff8080;
+  *((unsigned long*)& __m128i_result[1]) = 0x85bd6b0e94d89998;
+  *((unsigned long*)& __m128i_result[0]) = 0xd83c8081ffff808f;
+  __m128i_out = __lsx_vsadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffff01ff01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3c600000ff800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x3c5fffffff7fffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffefffeff00feff;
+  __m128i_out = __lsx_vsadd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000040d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000040d;
+  __m128i_out = __lsx_vsadd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x10f917d72d3d01e4;
+  *((unsigned long*)& __m128i_op1[0]) = 0x203e16d116de012b;
+  *((unsigned long*)& __m128i_result[1]) = 0x10f917d72d3d01e4;
+  *((unsigned long*)& __m128i_result[0]) = 0x203e16d116de012b;
+  __m128i_out = __lsx_vsadd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefff6fff80002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x82c53a0000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc72ef153fc02fdf7;
+  *((unsigned long*)& __m128i_result[1]) = 0x82c539ffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xc72df14afbfafdf9;
+  __m128i_out = __lsx_vsadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff3c992b2e;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffff730f;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff3c992b2e;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff730f;
+  __m128i_out = __lsx_vsadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffebd06fffe820c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff7ffe7fff3506;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffebd06fffe820c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7ffe7fff3506;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff0cffffff18;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefffefffeff6a0c;
+  __m128i_out = __lsx_vsadd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffefefe6a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c2bac2c2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fefefe68;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000c2bac2c2;
+  __m128i_out = __lsx_vsadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vsadd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vsadd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsadd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x027c027c000027c0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001ffff0003ffff0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000fffefffefffef;
+  *((unsigned long*)& __m128i_result[1]) = 0x001ffff0003ffff0;
+  *((unsigned long*)& __m128i_result[0]) = 0x028c026bfff027af;
+  __m128i_out = __lsx_vsadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000002bfd9461;
+  *((unsigned long*)& __m128i_result[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000002bfd9461;
+  __m128i_out = __lsx_vsadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0007000000040000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003000000010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0007000000040000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0003000000010000;
+  __m128i_out = __lsx_vsadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3fffff0000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3fffff0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f7fff003f800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f7fff003f800000;
+  __m128i_out = __lsx_vsadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffa8ff9f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffffffabff99;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000100000002007d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000000020001;
+  *((unsigned long*)& __m128i_result[1]) = 0x00010000ffab001c;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001ffffffadff9a;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffcfffcfffcfffd;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffcfffdfffcfffd;
+  __m128i_out = __lsx_vsadd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0800080008000800;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0800080008000800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0800080008000800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0800080008000800;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00d3012acc56f9bb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001021;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00d3012acc56f9bb;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001021;
+  __m128i_out = __lsx_vsadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x40f3fa0000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x40f3fa0000000000;
+  __m128i_out = __lsx_vsadd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x76f424887fffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc110000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc00d060000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xc110000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff7fffffff;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000008a0000008a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000008900000009;
+  *((unsigned long*)& __m128i_op1[1]) = 0x63637687636316bb;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x6363771163631745;
+  *((unsigned long*)& __m128i_result[0]) = 0x636363ec6363636c;
+  __m128i_out = __lsx_vsadd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000820202020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00fe01fc0005fff4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000003a24;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003dbe88077c78c1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000820205a44;
+  *((unsigned long*)& __m128i_result[0]) = 0x013bc084078278b5;
+  __m128i_out = __lsx_vsadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000002f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000029;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfbfbfb17fbfb38ea;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfbfb47fbfbfb0404;
+  *((unsigned long*)& __m128i_result[1]) = 0xfbfbfb17fbfb3919;
+  *((unsigned long*)& __m128i_result[0]) = 0xfbfb47fbfbfb042d;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffff60ca7104649;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff790a15db63d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffff60ca710464a;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffff790a15db63e;
+  __m128i_out = __lsx_vsadd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000140001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000140001;
+  __m128i_out = __lsx_vsadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffff46;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsadd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000d0000000d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8006000000040000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8002000000000007;
+  *((unsigned long*)& __m128i_result[1]) = 0x8006000000040000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8002000d00000014;
+  __m128i_out = __lsx_vsadd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000014;
+  __m128i_out = __lsx_vsadd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808081;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x80808080ffffffff;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001000;
+  __m128i_out = __lsx_vsadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f801fe000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3fc03fc000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f801fdfffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x3fc03fc000000003;
+  __m128i_out = __lsx_vsadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00fe000100cf005f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5f675e96e29a5a60;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x5fff5e97e2ff5abf;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefffefffefffeff;
+  __m128i_out = __lsx_vsadd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000003000000d613;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c0000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000003000000d612;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000bfffffff;
+  __m128i_out = __lsx_vsadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff80005613;
+  *((unsigned long*)& __m128i_op0[0]) = 0x007f800000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x80808080806b000b;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff80005613;
+  *((unsigned long*)& __m128i_result[0]) = 0x81000080806b000b;
+  __m128i_out = __lsx_vsadd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x80808080806b000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x80808080806b000b;
+  __m128i_out = __lsx_vsadd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001000100010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010058;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001001100110068;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000000000000;
+  __m128i_out = __lsx_vsadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000004;
+  __m128i_out = __lsx_vsadd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff81010102;
+  *((unsigned long*)& __m128i_result[1]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfeffffffffffffff;
+  __m128i_out = __lsx_vsadd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00123fff00120012;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0012001200120012;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000005003a;
+  *((unsigned long*)& __m128i_result[1]) = 0x00123fff00120012;
+  *((unsigned long*)& __m128i_result[0]) = 0x001200120017004c;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67157b5100005000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x387c7e0a133f2000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000003ddc5dac;
+  *((unsigned long*)& __m128i_result[1]) = 0x67157b5100005000;
+  *((unsigned long*)& __m128i_result[0]) = 0x387c7e0a511b7dac;
+  __m128i_out = __lsx_vsadd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x67eb85b0b2ebb001;
+  *((unsigned long*)& __m128i_result[0]) = 0xc8847ef6ed3f2000;
+  __m128i_out = __lsx_vsadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x98147a4f4d144fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x377b810812c0dfff;
+  *((unsigned long*)& __m128i_result[1]) = 0x98137a4d4d144fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x377a810612c0dfff;
+  __m128i_out = __lsx_vsadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbfd10d0d7b6b6b73;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc5c534920000c4ed;
+  *((unsigned long*)& __m128i_result[1]) = 0xbfd10d0d7b6b6b73;
+  *((unsigned long*)& __m128i_result[0]) = 0xc5c534920000c4ed;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fffffffe;
+  __m128i_out = __lsx_vsadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000600007fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000008ffffa209;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000600007fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000008ffffa209;
+  __m128i_out = __lsx_vsadd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000aa822a79308f6;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000aa822a79308f6;
+  *((unsigned long*)& __m128i_op1[0]) = 0x03aa558e1d37b5a1;
+  *((unsigned long*)& __m128i_result[1]) = 0x00155044ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x03aa558e2584c86f;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x021b7d24c9678a35;
+  *((unsigned long*)& __m128i_op0[0]) = 0x030298a6a1030a49;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x021b7d24c9678a35;
+  *((unsigned long*)& __m128i_result[0]) = 0x030298a6a1030a49;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2006454652525252;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2006454652525252;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffffffffffff;
+  __m128i_out = __lsx_vsadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000014eb54ab;
+  *((unsigned long*)& __m128i_op1[0]) = 0x14eb6a002a406a00;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff14eb54ab;
+  *((unsigned long*)& __m128i_result[0]) = 0x14ea6a002a406a00;
+  __m128i_out = __lsx_vsadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xce9035c49ffff570;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[0]) = 0xce9035c49ffff574;
+  __m128i_out = __lsx_vsadd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00007a8000000480;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000485000004cc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00007a8000000480;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000485000004cc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000f50000000900;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000090a00000998;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x636363633f3e47c1;
+  *((unsigned long*)& __m128i_op0[0]) = 0x41f8e080f1ef4eaa;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000807bf0a1f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000800ecedee68;
+  *((unsigned long*)& __m128i_result[1]) = 0x63636b6afe486741;
+  *((unsigned long*)& __m128i_result[0]) = 0x41f8e880ffffffff;
+  __m128i_out = __lsx_vsadd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ebd20000714f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00012c8a0000a58a;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffb81a6f70;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000d48eaa1a2;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffb81ae0bf;
+  *((unsigned long*)& __m128i_result[0]) = 0x00012c9748eaffff;
+  __m128i_out = __lsx_vsadd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x004eff6200d2ff76;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff70002800be00a0;
+  *((unsigned long*)& __m128i_result[1]) = 0x004eff6200d2ff76;
+  *((unsigned long*)& __m128i_result[0]) = 0xff70002800be00a0;
+  __m128i_out = __lsx_vsadd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ebd20000714f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00012c8a0000a58a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ebd20000714f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00012c8a0000a58a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000e29e;
+  *((unsigned long*)& __m128i_result[0]) = 0x000259140000ffff;
+  __m128i_out = __lsx_vsadd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x10f8000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001000010f8;
+  *((unsigned long*)& __m128i_result[1]) = 0x10f8000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001000010f8;
+  __m128i_out = __lsx_vsadd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0177fff0fffffff0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000011ff8bc;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsadd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
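+  /* The cases below switch to __lsx_vssub_*: element-wise saturating
+     subtraction, clamping to the signed (b/h/w/d) or unsigned
+     (bu/hu/wu/du) range of the element width instead of wrapping.  */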
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f801fa06451ef11;
+  *((unsigned long*)& __m128i_op0[0]) = 0x68bcf93435ed25ed;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffb64c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000003900;
+  *((unsigned long*)& __m128i_result[0]) = 0x68bcf93435ed25ed;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x460f3b393ef4be3a;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x04e00060ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x04e00060ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x04e00060ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x04e00060ffffffff;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x007fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x007fffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x004200a000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x004200a000200001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000001c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000001c;
+  *((unsigned long*)& __m128i_result[1]) = 0x004200a000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x004200a000200000;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0c03e17edd781b11;
+  *((unsigned long*)& __m128i_op0[0]) = 0x342caf9be5579ebe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000f909;
+  *((unsigned long*)& __m128i_result[1]) = 0x0c03e17edd781b11;
+  *((unsigned long*)& __m128i_result[0]) = 0x342caf9be55700b5;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf436f3f52f4ef4a8;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xf4b6f3f52f4ef4a8;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001fc0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000002010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001fbdff0;
+  __m128i_out = __lsx_vssub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
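+  /* Unsigned saturation on subtraction clamps at zero: each
+     0x00000000 - 0xffffffff word above underflows and is held at 0.  */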
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000fe00ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffff01ff01;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000101fd01fe;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
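+  /* Signed halfword example: 0x0000 - 0xffff (-1) gives 0x0001, and
+     0x00ff - 0xff01 (-255) gives 0x01fe, matching the expected
+     0x0001000101fd01fe in the low half above.  */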
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00001801f0307f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00001801f0307f80;
+  __m128i_out = __lsx_vssub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffff8f8dada;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff01018888;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010108082626;
+  *((unsigned long*)& __m128i_result[0]) = 0x01010101ffff7878;
+  __m128i_out = __lsx_vssub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffdfffffffe0;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffdfffffffe0;
+  __m128i_out = __lsx_vssub_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000c2f90000bafa;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000c2f90000bafa;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000c2fa8000c2fa;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff3d06ffff4506;
+  *((unsigned long*)& __m128i_result[0]) = 0x7ffffffe7ffff800;
+  __m128i_out = __lsx_vssub_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff000000ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff000000000000;
+  __m128i_out = __lsx_vssub_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00fe000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe80000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x027e0000000000ff;
+  __m128i_out = __lsx_vssub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x11000f2010000e20;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0f000d200e000c20;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x11000f2010000e20;
+  *((unsigned long*)& __m128i_result[0]) = 0x0f000d200e000c20;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc14eef7fc14ea000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000ea000010fa101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffff3fffffff3;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffff3fffffff3;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffff3fffffff4;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffff3fffffff4;
+  __m128i_out = __lsx_vssub_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001ffff0003ffff0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000fffefffefffef;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x001ffff0003ffff0;
+  *((unsigned long*)& __m128i_result[0]) = 0x000fffefffefffef;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3fffff0000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3fffff0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000c0000bd49;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000c7fff000c;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0xb);
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000bd3d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000c7fff000c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000006ffef000;
+  __m128i_out = __lsx_vssub_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x006f0efe258ca851;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff90ffffe0f5;
+  *((unsigned long*)& __m128i_result[0]) = 0x006e7973258d0ef4;
+  __m128i_out = __lsx_vssub_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0040004000400040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0040004000400040;
+  *((unsigned long*)& __m128i_result[1]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m128i_result[0]) = 0xffc0ffc0ffc0ffc0;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc2ffe700000007;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffc100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x41dfffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xbde2ffe800000007;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffc100010001;
+  __m128i_out = __lsx_vssub_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1111311111114111;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111311111112111;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000007fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x1111311111114111;
+  *((unsigned long*)& __m128i_result[0]) = 0x1111311111110000;
+  __m128i_out = __lsx_vssub_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ca02f854;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffb4ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffff98dea;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x40f3fa0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xc00fffffffffb4ff;
+  *((unsigned long*)& __m128i_result[0]) = 0xbf0c05fffff98dea;
+  __m128i_out = __lsx_vssub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000002f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000029;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x010101010101012f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010129;
+  __m128i_out = __lsx_vssub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m128i_result[1]) = 0x1202120212021202;
+  *((unsigned long*)& __m128i_result[0]) = 0x1202120212021202;
+  __m128i_out = __lsx_vssub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0202fe02fd020102;
+  *((unsigned long*)& __m128i_result[1]) = 0xfefcfefcfefcfefc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfcfc00fc01fcfdfc;
+  __m128i_out = __lsx_vssub_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x03f1e3d28b1a8a1a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000001d5d4;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000150d707009;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fffe2a2c;
+  *((unsigned long*)& __m128i_result[0]) = 0x03f1e3bd80000000;
+  __m128i_out = __lsx_vssub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vssub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ef8000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8108000000000000;
+  __m128i_out = __lsx_vssub_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffd5002affffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x343d8dc6b0ed5a08;
+  *((unsigned long*)& __m128i_result[1]) = 0x002affd600000001;
+  *((unsigned long*)& __m128i_result[0]) = 0xcbc2723a4f12a5f8;
+  __m128i_out = __lsx_vssub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000d0000000d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x6363635663636356;
+  __m128i_out = __lsx_vssub_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000063b2ac27;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffaa076aeb;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0001ffff9515;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff63b3584e;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000fffdaa07d5d6;
+  __m128i_out = __lsx_vssub_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00004000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffff81;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffff7c;
+  __m128i_out = __lsx_vssub_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff81010102;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fc0010181020103;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fc0ffff81020103;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vssub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vssub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff7ffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffff7ffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff7cffd6ffc700b0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x008300290038ff50;
+  __m128i_out = __lsx_vssub_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4e3e13368c17f6e6;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000f3040705;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000f3040705;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xbfd10d0d7b6b6b73;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc5c53492f25acbf2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000f3040705;
+  *((unsigned long*)& __m128i_result[1]) = 0xbfd10d0d7b6b6b73;
+  *((unsigned long*)& __m128i_result[0]) = 0xc5c534920000c4ed;
+  __m128i_out = __lsx_vssub_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001e03;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000011e04;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000c0dec4d1;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff3f213b2f;
+  __m128i_out = __lsx_vssub_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363abdf16;
+  *((unsigned long*)& __m128i_op0[0]) = 0x41f8e08016161198;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000246d9755;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000002427c2ee;
+  *((unsigned long*)& __m128i_result[1]) = 0x636363633f3e47c1;
+  *((unsigned long*)& __m128i_result[0]) = 0x41f8e080f1ef4eaa;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0fffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x41957fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[0]) = 0xbf6b810181018101;
+  __m128i_out = __lsx_vssub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xa000308000008002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0500847b00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vssub_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001fffe00014b41;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001fffe0001ffde;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0002ffffb4bf;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0002ffff0022;
+  __m128i_out = __lsx_vssub_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff8ffa2fffdffb0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0108015e01030150;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000017f0000;
+  __m128i_out = __lsx_vssub_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vssub_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
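+  /* From here on the cases exercise __lsx_vhaddw_* (horizontal
+     widening add): each result element is the odd-indexed element of
+     the first operand plus the even-indexed element of the second,
+     widened to twice the element width; the *u_*u variants zero-extend
+     instead of sign-extending.  */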
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xa8beed87bc3f2be1;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0024d8f6a494006a;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001a8beed86;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000010024d8f5;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4ee85545068f3133;
+  *((unsigned long*)& __m128i_op0[0]) = 0x870968c1f56bb3cd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x004e005500060031;
+  *((unsigned long*)& __m128i_result[0]) = 0xff870068fff5ffb3;
+  __m128i_out = __lsx_vhaddw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
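+  /* For vhaddw.h.b above, the high (odd) byte of each halfword of the
+     first operand is sign-extended and added to the low (even) byte of
+     the second operand, so 0x87 widens to 0xff87 and 0x4e to 0x004e in
+     the expected value.  */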
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4ee85545ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x870968c1f56bb3cd;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x870968c1f56bb3cd;
+  __m128i_out = __lsx_vhaddw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000200;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff02000200;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffe00001ffe200;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000383;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe400000003ffc001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffe000ffff1fff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffe000ffff2382;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000fff8fff8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fff80000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000ff0000;
+  __m128i_out = __lsx_vhaddw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0080000000000000;
+  __m128i_out = __lsx_vhaddw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffebe6ed565;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffebe6ed565;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffbe6ed563;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x03574e39e496cbc9;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x03574e38e496cbc9;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000013d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0010001000030000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0006000200000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0006000200000000;
+  __m128i_out = __lsx_vhaddw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7505443065413aed;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100d6effefd0498;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000750500006541;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000100fffffefd;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffff01ff01;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffe;
+  __m128i_out = __lsx_vhaddw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x78c00000ff000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000078c00000;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000078c00000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6a57a30ff0000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000f0000000;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000040d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffff00ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00ffff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffe000000f6;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xabff54e911f71b07;
+  *((unsigned long*)& __m128i_op0[0]) = 0xa9ec4882f216ea11;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfc01fcfefc02fdf7;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe00fcfffe01fd01;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xaa0051e90ff91808;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7d3ac60000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000007d3ac600;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0052005200520052;
+  *((unsigned long*)& __m128i_result[0]) = 0x0052005200520052;
+  __m128i_out = __lsx_vhaddw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001fffffffe;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001fffffffe;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000fffd;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff000000ff;
+  __m128i_out = __lsx_vhaddw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffff82bb9784;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffffc6bb97ac;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff82bb9784;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffc6bb97ac;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000030000003f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000030000003f;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00060012000e002b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000049ffffffaa;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000060000000e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000127fffffea;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000060000000e;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001201fe01e9;
+  __m128i_out = __lsx_vhaddw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00009f0000009f00;
+  __m128i_out = __lsx_vhaddw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x80000000b57ec564;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000083ff0be0;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001b57ec563;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000183ff0bdf;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000bd3d00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000bd3d00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000bd3d00000000;
+  __m128i_out = __lsx_vhaddw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000001f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000001f;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff007f00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff007f00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000007f00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000007f00000000;
+  __m128i_out = __lsx_vhaddw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff082f000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003f000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc04d600d3aded151;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x004cff8fffde0051;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vhaddw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ef400ad21fc7081;
+  *((unsigned long*)& __m128i_op1[0]) = 0x28bf0351ec69b5f2;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ad00007081;
+  *((unsigned long*)& __m128i_result[0]) = 0x000003510000b5f2;
+  __m128i_out = __lsx_vhaddw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffff000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4050000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000f80007;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000f8;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000e2e3ffffd1d3;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000008000e2e3;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000080000000;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5d7f5d807fea807f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x007f008000ea007f;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vhaddw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200010002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000200010002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000010004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000004;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0001ffff9515;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0001ffff9514;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vhaddw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000001b0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000001b0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000001b001b;
+  __m128i_out = __lsx_vhaddw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x800000007fffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x800000007fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x003f0000ffffffff;
+  __m128i_out = __lsx_vhaddw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000003effff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000003effff;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0042003e0042002f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001fffc0001fffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0042003e0042002f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001fffc0001fffc;
+  __m128i_out = __lsx_vhaddw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff80ffffffffff80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff80ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff7ffffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fffffffe;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5252adadadadadad;
+  *((unsigned long*)& __m128i_op1[0]) = 0xadad52525252adad;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000adad0000adad;
+  *((unsigned long*)& __m128i_result[0]) = 0x000052520000adad;
+  __m128i_out = __lsx_vhaddw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000004870ba0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000004870ba0;
+  __m128i_out = __lsx_vhaddw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x9c9c9c9c9c9c9c9c;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf359f359f359f359;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf359f359f359f359;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffff359f358;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffff359f358;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000010000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000010000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff8000010f800000;
+  __m128i_out = __lsx_vhaddw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff8000010f78;
+  *((unsigned long*)& __m128i_op1[1]) = 0x002a001a001a000b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001a0000000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vhaddw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000200000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0002000200000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000400000001;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00007fff7fff8000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xce9035c49ffff570;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0xce9035c49ffff574;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000454ffff9573;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff80ff807e017f01;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f3b7f3f7f3b7f21;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0a0000001e000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a000000f6000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0980ff8174017f01;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x80007fc000003f00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7d187e427c993f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7500000075000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7500000075000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00007d1800007c99;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7500000075007500;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00feff8000ff80ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00007d1800007c99;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000f50000007500;
+  *((unsigned long*)& __m128i_result[0]) = 0x00007e1600007d98;
+  __m128i_out = __lsx_vhaddw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff760386bdae46;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc1fc7941bc7e00ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0802080408060803;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff000086bd;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ca000000c481;
+  __m128i_out = __lsx_vhaddw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000ef0000000003b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000ef0000000003b;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000007fff9;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff2356fe165486;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5efeb3165bd7653d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000235600005486;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000b31600006544;
+  __m128i_out = __lsx_vhaddw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000003e2;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vhaddw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000001000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000e2e36363;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000063636363;
+  __m128i_out = __lsx_vhaddw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9c83e21a22001818;
+  *((unsigned long*)& __m128i_op0[0]) = 0xdd3b8b02563b2d7b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ebd20000714f;
+  *((unsigned long*)& __m128i_result[0]) = 0x00012c8a0000a58a;
+  __m128i_out = __lsx_vhaddw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000011ff040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000011ff040;
+  __m128i_out = __lsx_vhaddw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5555000054100000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5555000154100155;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000155;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhaddw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
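+  /* The cases below exercise the widening horizontal-subtract
+     intrinsics (__lsx_vhsubw_*), using the same operand and
+     expected-result checking pattern as the __lsx_vhaddw_* cases
+     above.  */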
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vhsubw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000052527d7d;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000052527d7d;
+  __m128i_out = __lsx_vhsubw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
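+  /* The doubleword extracted by __lsx_vpickve2gr_d below is only
+     stored into long_int_out; no ASSERTEQ follows it, and
+     __m128i_op0 is re-initialised for the next __lsx_vhsubw_w_h
+     case.  */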
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000201000000000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000fc0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000000;
+  __m128i_out = __lsx_vhsubw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vhsubw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ffff0000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ffff0000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f80000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ff00000000;
+  __m128i_out = __lsx_vhsubw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000000;
+  __m128i_out = __lsx_vhsubw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000002400180004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000024;
+  __m128i_out = __lsx_vhsubw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fffffc00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000010000;
+  __m128i_out = __lsx_vhsubw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffff01ff01;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffff02;
+  __m128i_out = __lsx_vhsubw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffff02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffff01ff01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0f180000ffe00000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffff0000010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfe00fe00fe00fd01;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe00fffefe0100f6;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff0001ffffff0a;
+  __m128i_out = __lsx_vhsubw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffff0000010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xabff54f1ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa5f7458b000802ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fff7fc01;
+  __m128i_out = __lsx_vhsubw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000100010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000100010;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vhsubw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000008000000080;
+  __m128i_out = __lsx_vhsubw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000000;
+  __m128i_out = __lsx_vhsubw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff07effffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100110002;
+  __m128i_out = __lsx_vhsubw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000100c6ffef10c;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffff70;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff9001a47e;
+  __m128i_out = __lsx_vhsubw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
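+  /* As above, the word extracted by __lsx_vpickve2gr_wu is stored
+     into unsigned_int_out without a corresponding check before the
+     next __lsx_vhsubw_h_b case.  */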
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000100c6ffef10c;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffff01;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffeff400000df4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ff91fffffff5;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff00650001ffb0;
+  __m128i_out = __lsx_vhsubw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000bfffffffe0f6;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000010001000a;
+  __m128i_out = __lsx_vhsubw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vhsubw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000002;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000002;
+  __m128i_out = __lsx_vhsubw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xbff0000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0039ffffffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffbeffffffffffff;
+  __m128i_out = __lsx_vhsubw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001ca02f854;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001ca02f854;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000004b01;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffb4ff;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000017161515;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000095141311;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x76f424887fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000170014;
+  *((unsigned long*)& __m128i_result[0]) = 0xff0cff78ff96ff14;
+  __m128i_out = __lsx_vhsubw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op0[0]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000008140c80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0037ffdfffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0037ffdfffeb007f;
+  __m128i_out = __lsx_vhsubw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x002affd600000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcbc2723a4f12a5f8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x343d8dc5b0ed5a08;
+  __m128i_out = __lsx_vhsubw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000000a6;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffff59;
+  __m128i_out = __lsx_vhsubw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00001b4a00007808;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffe4b5ffff87f8;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffff01;
+  __m128i_out = __lsx_vhsubw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3fc03fc000000003;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f7f1fd800000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x3fc03fc000000004;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0001ffff0001;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0001ffff0001;
+  __m128i_out = __lsx_vhsubw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000010100000101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010100000101;
+  __m128i_out = __lsx_vhsubw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0001ffff9515;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000fffe00006aea;
+  __m128i_out = __lsx_vhsubw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xc080800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc080800000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7efefefe82010201;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x418181017dfefdff;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000455555555;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000003fe0000141e;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffc01ffffebe2;
+  __m128i_out = __lsx_vhsubw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffc;
+  __m128i_out = __lsx_vhsubw_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfefeff00fefeff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfefeff00fefeff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x00007e7e00007e7e;
+  *((unsigned long*)& __m128i_result[0]) = 0x00007e7e00007e7e;
+  __m128i_out = __lsx_vhsubw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x67eb85afb2ebb000;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff7cffd6ffc700b0;
+  __m128i_out = __lsx_vhsubw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x67eb8590b2ebafe1;
+  __m128i_out = __lsx_vhsubw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x98147a504d145000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x377b810912c0e000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4e3e133738bb47d2;
+  *((unsigned long*)& __m128i_result[1]) = 0xff98007a004d0050;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff9ff4a0057000e;
+  __m128i_out = __lsx_vhsubw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x56a09e662ab46b31;
+  *((unsigned long*)& __m128i_op1[0]) = 0xb4b8122ef4054bb3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4b47edd10bfab44d;
+  __m128i_out = __lsx_vhsubw_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000501ffff0005;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000600000001;
+  __m128i_out = __lsx_vhsubw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vhsubw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe593c8c4e593c8c4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ff8000010f78;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff7f0080ff7ef088;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9727b8499727b849;
+  *((unsigned long*)& __m128i_op0[0]) = 0x12755900b653f081;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7d7f13fc7c7ffbf4;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff9727ffff9727;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffe79ffffba5f;
+  __m128i_out = __lsx_vhsubw_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00020000ffff0001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100000001;
+  __m128i_out = __lsx_vhsubw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffae001effae;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_result[1]) = 0xffaeffadffaeffad;
+  *((unsigned long*)& __m128i_result[0]) = 0xffaeffadffaeffad;
+  __m128i_out = __lsx_vhsubw_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000014eb54ab;
+  *((unsigned long*)& __m128i_op0[0]) = 0x14eb6a002a406a00;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff80008a7555aa;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a7535006af05cf9;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff758aaa56;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffa9fb0d07;
+  __m128i_out = __lsx_vhsubw_du_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xa2e3a36363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0xa2e3a36463636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000a2e300006363;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000a2e300006363;
+  __m128i_out = __lsx_vhsubw_wu_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vhsubw_hu_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000155;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000f0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffff10000;
+  __m128i_out = __lsx_vhsubw_qu_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
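+  /* The tests below exercise the even-element widening add intrinsics
+     (__lsx_vaddwev_h_b, _w_h, _d_w and _q_d): even-indexed elements of
+     both operands are sign-extended to the wider element type and added.  */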
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000100010001007c;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vaddwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3fffffff80000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00003ffd000a4000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffd000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffcffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000fffd000a0000;
+  __m128i_out = __lsx_vaddwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0800080008000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0800080008000800;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffff0000;
+  __m128i_out = __lsx_vaddwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000490000004d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff000000ff00ff00;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff00ff0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000049ffffff4d;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff01ffffffff;
+  __m128i_out = __lsx_vaddwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vaddwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000005e695e95;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5e695e96c396b402;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000005e94;
+  *((unsigned long*)& __m128i_result[0]) = 0x00005e96ffffb402;
+  __m128i_out = __lsx_vaddwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffb;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffb;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000100000000fc;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000100000000fc;
+  __m128i_out = __lsx_vaddwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000158;
+  __m128i_out = __lsx_vaddwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000005d5d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000005d5d;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5c9c9c9ce3636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x63635c9e63692363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffe3636363;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000063692363;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0202020202020203;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0202020202020203;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000002020202;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000002020202;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1817161517161514;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1615141315141312;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x76f424887fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000017161515;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000095141311;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000fffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfdfef9ff0efff900;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffcfd000000fb00;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001fe00f8000700;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000fb01;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000007000000;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000080806362;
+  *((unsigned long*)& __m128i_op1[0]) = 0x807f808000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff80806362;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000ff00ff;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000010002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ff960015;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000010002;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffff960015;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf6548a1747e59090;
+  *((unsigned long*)& __m128i_op0[0]) = 0x27b169bbb8145f50;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000047e59090;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffb8145f50;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00bbfff7fffffff7;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff008ff820;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00bbfff7fffffff7;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff008ff820;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000011ff040;
+  __m128i_out = __lsx_vaddwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000100010001fffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000100010001fffd;
+  __m128i_out = __lsx_vaddwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vaddwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffc2ffe700000007;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffc100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffc100010001;
+  __m128i_out = __lsx_vaddwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff80df00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000dfa6e0c6;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000d46cdc13;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000d46cdc13;
+  __m128i_out = __lsx_vaddwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfe813f00fe813f00;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe813f00fe813f00;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe813f00fe813f00;
+  __m128i_out = __lsx_vaddwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffe;
+  __m128i_out = __lsx_vaddwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
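+  /* The tests below exercise the odd-element widening add intrinsics
+     (__lsx_vaddwod_h_b, _w_h, _d_w and _q_d).  */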
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ca354688;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_result[1]) = 0x00040003ff83ff84;
+  *((unsigned long*)& __m128i_result[0]) = 0x00040003ff4dffca;
+  __m128i_out = __lsx_vaddwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000040d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000004;
+  __m128i_out = __lsx_vaddwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00001f5400000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001f00000000;
+  __m128i_out = __lsx_vaddwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000f80007;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0xb);
+  *((unsigned long*)& __m128i_op0[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff00000000;
+  __m128i_out = __lsx_vaddwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffff0100ff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffefffefffeffff;
+  __m128i_out = __lsx_vaddwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x478b478b38031779;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6b769e690fa1e119;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000001030103;
+  *((unsigned long*)& __m128i_result[1]) = 0x0047004700380017;
+  *((unsigned long*)& __m128i_result[0]) = 0x006bff9e0010ffe2;
+  __m128i_out = __lsx_vaddwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_op0[0]) = 0xbbc8ecc5f3ced5f3;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_op1[0]) = 0xbbc8ecc5f3ced5f3;
+  *((unsigned long*)& __m128i_result[1]) = 0xff80ffa2fff0ff74;
+  *((unsigned long*)& __m128i_result[0]) = 0xff76ffd8ffe6ffaa;
+  __m128i_out = __lsx_vaddwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1f54e0ab00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00001f5400000000;
+  __m128i_out = __lsx_vaddwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000000f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd70b30c96ea9f4e8;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa352bfac9269e0aa;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffd70b00006ea9;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffa352ffff9269;
+  __m128i_out = __lsx_vaddwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd70b30c96ea9f4e8;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa352bfac9269e0aa;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffd70b00006ea9;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffa352ffff9269;
+  __m128i_out = __lsx_vaddwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe593c8c4e593c8c4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8144ffff01c820a4;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9b2ee1a4034b4e34;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff80c400000148;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff80c1ffffe8de;
+  __m128i_out = __lsx_vaddwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffefffffffe;
+  __m128i_out = __lsx_vaddwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xa486c90f6537b8d7;
+  *((unsigned long*)& __m128i_op0[0]) = 0x58bcc2013ea1cc1e;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffa486c90f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000058bcc201;
+  __m128i_out = __lsx_vaddwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00001802041b0013;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00001802041b0014;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000003004;
+  __m128i_out = __lsx_vaddwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff02000200;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffdfff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffdfff;
+  __m128i_out = __lsx_vaddwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fbf83468;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fbf83468;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff82bb9784;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffc6bb97ac;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000007ffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vaddwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001000fbff9;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000002ff9afef;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000004f804f81;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000004f804f80;
+  __m128i_out = __lsx_vaddwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000020;
+  __m128i_out = __lsx_vaddwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000fff0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000010;
+  __m128i_out = __lsx_vaddwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffe00029f9f6061;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x64e464e464e464e4;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffeffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000064e264e6;
+  __m128i_out = __lsx_vaddwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0305030203020502;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0301030203020502;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000003050302;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000003010302;
+  __m128i_out = __lsx_vaddwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ff0000ff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x01fc020000fe0100;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff0000ff0000;
+  __m128i_out = __lsx_vaddwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vaddwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff84fff4ff84fff4;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00a6ffceffb60052;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xff84fff4ff84fff4;
+  __m128i_out = __lsx_vaddwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000fefefe6a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c2bac2c2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fefefe6a;
+  __m128i_out = __lsx_vaddwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0032000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000020;
+  __m128i_out = __lsx_vaddwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff0000;
+  __m128i_out = __lsx_vaddwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5a57bacbd7e39680;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6bae051ffed76001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf3e6586b60d7b152;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf7077b934ac0e000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4e3e133738bb47d2;
+  __m128i_out = __lsx_vaddwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000117d00007f7b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000093d0000187f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7d7f027f7c7f7c79;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7e7f7e7f027f032f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7d7f13fc7c7ffbf4;
+  __m128i_out = __lsx_vaddwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x43d3e0000013e000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x43d3e0000013e000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffd3000000130000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffd3000000130000;
+  __m128i_out = __lsx_vsubwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsubwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0001ffff9515;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000100010000ffda;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000016;
+  __m128i_out = __lsx_vsubwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffbfbfbfc0;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffbfbfbfc0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_result[1]) = 0xffbfffbfff7fff80;
+  *((unsigned long*)& __m128i_result[0]) = 0xffbfffbfff7fff80;
+  __m128i_out = __lsx_vsubwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000808000020200;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff8000020000;
+  __m128i_out = __lsx_vsubwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x413e276583869d79;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f7f017f9d8726d3;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7c7cd2eb63637c52;
+  *((unsigned long*)& __m128i_op1[0]) = 0x82ffd2210127add2;
+  *((unsigned long*)& __m128i_result[1]) = 0xffc2007aff230027;
+  *((unsigned long*)& __m128i_result[0]) = 0x0080005eff600001;
+  __m128i_out = __lsx_vsubwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000011ff040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010012;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffe1ffc0;
+  __m128i_out = __lsx_vsubwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000004000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffc000000000;
+  __m128i_out = __lsx_vsubwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000d;
+  __m128i_out = __lsx_vsubwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000ffff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000100c6ffef10c;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffff01;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffeff400000df4;
+  __m128i_out = __lsx_vsubwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000002050320;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000002050320;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000001c88bf0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000320;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000007730;
+  __m128i_out = __lsx_vsubwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vsubwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001030103;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000103;
+  __m128i_out = __lsx_vsubwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x33eac9fdca42f660;
+  *((unsigned long*)& __m128i_op0[0]) = 0xaa472d26fe867091;
+  *((unsigned long*)& __m128i_op1[1]) = 0x33eac9fdca42f660;
+  *((unsigned long*)& __m128i_op1[0]) = 0xaa472d26fe867091;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsubwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000004;
+  __m128i_out = __lsx_vsubwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ff0000857a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x05fafe0101fe000e;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffff7a86;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffe01fff2;
+  __m128i_out = __lsx_vsubwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf654ad7447e59090;
+  *((unsigned long*)& __m128i_op1[0]) = 0x27b1b106b8145f50;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffb81a6f70;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000047eba0b0;
+  __m128i_out = __lsx_vsubwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000c01020d8009;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000003004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000c01020d5005;
+  __m128i_out = __lsx_vsubwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000fe00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffff01ff01;
+  __m128i_out = __lsx_vsubwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4f804f804f804f80;
+  __m128i_out = __lsx_vsubwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xb9fe3640e4eb1b18;
+  *((unsigned long*)& __m128i_op0[0]) = 0x800000005b4b1b18;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffb9fe00003640;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffe4eb00001b18;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x80001b155b4b0000;
+  __m128i_out = __lsx_vsubwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100080000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffefff80000;
+  __m128i_out = __lsx_vsubwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3fc03fc000000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3fc03fc000000003;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f7f1fd800000004;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xc0411fe800000000;
+  __m128i_out = __lsx_vsubwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00e400ff00e400;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff01e41ffff0e440;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xff01e420fff0e442;
+  __m128i_out = __lsx_vsubwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3ff0000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x40f3fa0000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc485edbcc0000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x003f000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x007c000d00400000;
+  __m128i_out = __lsx_vsubwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x841f000fc28f801f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x007c0000003e0080;
+  __m128i_out = __lsx_vsubwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vsubwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000000;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000210011084;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000007fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001001;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff8000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff8000000000;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffefefe6a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c2bac2c2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffefe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffc2ba;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000027f000000fe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe80000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000018000000000;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffff7a53;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000ff0000ff86;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffa6ff91fdd8ef77;
+  *((unsigned long*)& __m128i_op1[0]) = 0x061202bffb141c38;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000005a00000228;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffff9ee000004ec;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000001fe02000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000001fe02000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x30eb020302101b03;
+  *((unsigned long*)& __m128i_op0[0]) = 0x020310d0c0030220;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000002345454;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000c0dec4ca;
+  *((unsigned long*)& __m128i_result[1]) = 0x000030ebffffffdc;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000203ffffff25;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x380fdfdfc0000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffc7f100004000;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00005dcbe7e830c0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000015d926c7;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000e41b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000005dcb;
+  __m128i_out = __lsx_vsubwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00f0008100800080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00f000807000009e;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000ec382e;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000ec382d;
+  __m128i_out = __lsx_vsubwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfcfcfcfcfcfcfcfd;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfcfcfcfcfcfc0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00009c7c00007176;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffcfcfcfc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffcfc6080;
+  __m128i_out = __lsx_vsubwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_op0[0]) = 0xbbc8ecc5f3ced5f3;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffaefffbffaefffb;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffaefffbffaefffb;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffc105d1aa;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffbc19ecca;
+  __m128i_out = __lsx_vsubwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000101fd01fe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000fe00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vsubwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffff0000000ad3d;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff000fffff000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xefffdffff0009d3d;
+  __m128i_out = __lsx_vsubwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ff0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff00ffffff01;
+  __m128i_out = __lsx_vsubwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100010001007c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000100010001007c;
+  __m128i_out = __lsx_vsubwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5f675e96e29a5a60;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00fe000100cf005f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x5e695e95e1cb5a01;
+  __m128i_out = __lsx_vsubwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7efefefe82010201;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0403cfcf01c1595e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x837cd5db43fc55d4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_result[1]) = 0x0007005200440062;
+  *((unsigned long*)& __m128i_result[0]) = 0x0080005e007f00d8;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x061006100613030c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4d6814ef9c77ce46;
+  *((unsigned long*)& __m128i_result[1]) = 0x010f010f0112010b;
+  *((unsigned long*)& __m128i_result[0]) = 0x016701ee01760145;
+  __m128i_out = __lsx_vaddwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffcafff8ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000a0;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe6d4572c8a5835bc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe5017c2ac9ca9fd0;
+  *((unsigned long*)& __m128i_result[1]) = 0x00d3012b015700bb;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001002affca0070;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000fea0000fffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363771163631745;
+  *((unsigned long*)& __m128i_op1[0]) = 0x636363ec6363636c;
+  *((unsigned long*)& __m128i_result[1]) = 0x006300fb00630143;
+  *((unsigned long*)& __m128i_result[0]) = 0x0063ffec0063006c;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9c9c9c9c9c9c9c9d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff0000;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080ffffffff8080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00008080ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xff80ffffffffff80;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff80ffffffff;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffac0a000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ac00000000;
+  __m128i_out = __lsx_vaddwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00197f26cb658837;
+  *((unsigned long*)& __m128i_op0[0]) = 0x01009aa4a301084b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_result[1]) = 0x0037ffd40083ffe5;
+  *((unsigned long*)& __m128i_result[0]) = 0x001e0052001ffff9;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf589caff5605f2fa;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000014eb54ab;
+  *((unsigned long*)& __m128i_op1[0]) = 0x14eb6a002a406a00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000eb00ab;
+  *((unsigned long*)& __m128i_result[0]) = 0x017400ff004500fa;
+  __m128i_out = __lsx_vaddwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff00ffffff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000f50000000900;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000090900000998;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff000900ffff98;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x10f881a20ffd02b0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff8ffa2fffdffb0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ff800000;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x10f917d72d3d01e4;
+  *((unsigned long*)& __m128i_op0[0]) = 0x203e16d116de012b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000017d7000001e4;
+  *((unsigned long*)& __m128i_result[0]) = 0x000016d10000012b;
+  __m128i_out = __lsx_vaddwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1e0200001e020000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffcfffcfffcfffd;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffcfffdfffcfffd;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffcfffffffd;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffdfffffffd;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff3fbfffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000100fe000100fe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vaddwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001c8520000c97d;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001c8520001c87d;
+  __m128i_out = __lsx_vaddwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff000000ff00;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000010100000101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010100000101;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000400000004;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vaddwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffac0a000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000085af0000b000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00017ea200002000;
+  __m128i_out = __lsx_vaddwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000080000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa2f54a1ea2f54a1e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x00004a1e00004a1e;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000868686868686;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000868600008785;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x63636b6afe486741;
+  *((unsigned long*)& __m128i_op0[0]) = 0x41f8e880ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe363636363abdf16;
+  *((unsigned long*)& __m128i_op1[0]) = 0x41f8e08016161198;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000cecd00004657;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000c90000011197;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001000f000e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fff1000ffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000f000e;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000ffffe;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0c07e181ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3430af9effffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vaddwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000024;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000024;
+  __m128i_out = __lsx_vaddwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000fe00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000fe00ff;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00307028003f80b0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0040007fff800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000003f80b0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ff800000;
+  __m128i_out = __lsx_vaddwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vaddwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00060012000e002b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000049ffffffaa;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000e002b;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffaa;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000bfffffffe0f6;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffff7a53;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000001f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000001f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000001f;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000001f;
+  __m128i_out = __lsx_vaddwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff7f80ffff7f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff7f80ffff7f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff7f80ffff7f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff7f80ffff7f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fffeff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fffeff00;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000003dffc2;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080008000800080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0080006b0000000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000800080;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000b;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000ff00ff;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000455555555;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000055555555;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff7f810100001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000400530050ffa6;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff007fff810001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000400530050ffa6;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffff811001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000a1ff4c;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000001f;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000008000001e;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf9796558e39953fd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x86dd8341b164f12b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9611c3985b3159f5;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000035697d4e;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000013ecaadf2;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ef00ff010f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff010f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc1f03e1042208410;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001000110;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000431f851f;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff80ffffff80ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000018080807f;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001ffff80fe;
+  __m128i_out = __lsx_vaddwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffff8000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffff8000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff8000000000;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1000000010000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000180100100000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000b5207f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00001801b5307f80;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001300000013;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000030000003f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000030000003f;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe218ffffea10;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff208fffffa02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffff208fffffa02;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000080000000;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffbfffffffbe;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000003fbf3fbf;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff7fff7fff7ff8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000000f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff7fff7fff8007;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x06b1213ef1efa299;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8312f5424ca4a07f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1f1f1f1f1f1f1f00;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1f1f1f27332b9f00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xa23214697fd03f7f;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x80000000ffffd860;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff80000000;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0002000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000014;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f8000007f800000;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffd27db010d20fbf;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffd27db010d20fbf;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0xffa4fb6021a41f7e;
+  __m128i_out = __lsx_vaddwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000a16316b0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x16161616a16316b0;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ffffa10016;
+  *((unsigned long*)& __m128i_result[0]) = 0x01150115ffa10016;
+  __m128i_out = __lsx_vaddwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x08fdc221bfdb1927;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4303c67e9b7fb213;
+  *((unsigned long*)& __m128i_op1[1]) = 0x08fdc221bfdb1927;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4303c67e9b7fb213;
+  *((unsigned long*)& __m128i_result[1]) = 0x00100184017e0032;
+  *((unsigned long*)& __m128i_result[0]) = 0x0086018c01360164;
+  __m128i_out = __lsx_vaddwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x007e007e007e007e;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vaddwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff77777807777775;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe6eeef00eeeeeebf;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000f00f;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff007700070077;
+  *((unsigned long*)& __m128i_result[0]) = 0x00e600ef00ee01de;
+  __m128i_out = __lsx_vaddwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000120002000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000200020;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000003f;
+  __m128i_out = __lsx_vaddwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vaddwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vaddwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000fe00fe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00fe00fe00fe00fe;
+  __m128i_out = __lsx_vaddwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000011ffee;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000dfff2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000ff;
+  __m128i_out = __lsx_vaddwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00e0000000e00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000e0000000e0;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff7100fffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ff00ffffa10016;
+  *((unsigned long*)& __m128i_op1[0]) = 0x01150115ffa10016;
+  *((unsigned long*)& __m128i_result[1]) = 0x000100fe000070a1;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000115ffffffa1;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000030000003f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000fffe0000fffe;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe218ffffea10;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff208fffffa02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000001000f00fe00;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000017fff00fe7f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000f00;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffff00;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4429146a7b4c88b2;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe22b3595efa4aa0c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000442900007b4c;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000e22b0000efa4;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0002000000000007;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000600000004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000636500006363;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000000a6;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000080800000808;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000800000000000;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x5); /* extracted scalar is assigned but not asserted here */
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001fffe0001fefc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001fffe0001fefc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000002;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fff80000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff8000010f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff8000010f78;
+  __m128i_out = __lsx_vaddwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x04faf60009f5f092;
+  *((unsigned long*)& __m128i_op0[0]) = 0x04fafa9200000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff9fffefff9ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000004fa000009f5;
+  *((unsigned long*)& __m128i_result[0]) = 0x000004f3fffffff9;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xa486083e6536d81d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x58bc43853ea123ed;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000a486083e;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000058bc4385;
+  __m128i_out = __lsx_vaddwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000210011084;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vaddwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffc01;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffc01;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001fffffffe;
+  __m128i_out = __lsx_vaddwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000100010001fffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000100010001fffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000020002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000020002;
+  __m128i_out = __lsx_vaddwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op0[0]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op1[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op1[0]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000006e17bfd8;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000006e17bfd8;
+  __m128i_out = __lsx_vaddwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003f000400000003;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003f000400000003;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000000010000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000400004;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000003f0004;
+  __m128i_out = __lsx_vaddwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd1c0c0a5baf8f8d3;
+  *((unsigned long*)& __m128i_op0[0]) = 0xecbbbbc5d5f3f3f3;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffaefffbffaefffb;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffaefffbffaefffb;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000d16fc0a0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ec6abbc0;
+  __m128i_out = __lsx_vaddwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffff000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000d00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffef;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000c;
+  __m128i_out = __lsx_vaddwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000017f800001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000017f800001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000007f800001;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000007f800001;
+  __m128i_out = __lsx_vaddwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000080000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000808ff821;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vaddwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000000;
+  __m128i_out = __lsx_vaddwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000c2f90000bafa;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000c2fa8000c2fa;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000c2f90000bafa;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000003fffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00001fff00001fff;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vaddwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000000000000;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x379674c000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3789f68000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x379674c000000000;
+  __m128i_out = __lsx_vaddwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000807bf0a1f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000800ecedee68;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5847b72626ce61ef;
+  *((unsigned long*)& __m128i_op1[0]) = 0x110053f401e7cced;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x5847bf2de5d8816f;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000155;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ff00ff00000083;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0xff01ff010000ff7d;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000fffc;
+  __m128i_out = __lsx_vsubwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff00fc0000ff02;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xff01ff040000fffe;
+  __m128i_out = __lsx_vsubwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffacdb6dbecac;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1f5533a694f902c0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x21011f3f193d173b;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff39ff37ff35ff33;
+  *((unsigned long*)& __m128i_result[1]) = 0x00fe008e009e0071;
+  *((unsigned long*)& __m128i_result[0]) = 0x001c006f00c4008d;
+  __m128i_out = __lsx_vsubwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9c9ca19d509ae734;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd1b09480f2123460;
+  *((unsigned long*)& __m128i_op1[1]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001fffeff98;
+  *((unsigned long*)& __m128i_result[0]) = 0x0014ffe4ff76ffc4;
+  __m128i_out = __lsx_vsubwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x34947b4b11684f92;
+  *((unsigned long*)& __m128i_op1[0]) = 0xee297a731e5c5f86;
+  *((unsigned long*)& __m128i_result[1]) = 0xff6cffb5ff98ff6e;
+  *((unsigned long*)& __m128i_result[0]) = 0xffd7ff8dffa4ff7a;
+  __m128i_out = __lsx_vsubwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffff8f8dada;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff01018888;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff3ea5016b;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffefffe3f6fb04d;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000d96f;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001ffffd83b;
+  __m128i_out = __lsx_vsubwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000f0009d3c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000016fff9d3d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000bd0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000007f0;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000916c;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000010000954d;
+  __m128i_out = __lsx_vsubwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000100010000fe01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000050000007b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000500000005;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffbffffff85;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffc0000fdfc;
+  __m128i_out = __lsx_vsubwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000032;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000032;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff80df00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xa5c4c774856ba837;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2a569f8081c3bbe9;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffb96bffff57c9;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff6080ffff4417;
+  __m128i_out = __lsx_vsubwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000063b2ac27;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffaa076aeb;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0001ffff9515;
+  __m128i_out = __lsx_vsubwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vsubwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00060fbf00040fbf;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00020fbf00000fbf;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffac5cffffac5c;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffac5cffffac5c;
+  __m128i_out = __lsx_vsubwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffaefffbffaefffb;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffaefffbffaefffb;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0005ffff0005;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000500000004;
+  __m128i_out = __lsx_vsubwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
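+  /* As above, but subtracting the even-indexed unsigned words and widening
+     each difference to a doubleword (__lsx_vsubwev_d_wu).  */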
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000a1630000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000a1630000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vsubwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001fd0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000001fd0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff7ffffef77fffdd;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf77edf9cffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000008800022;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000001;
+  __m128i_out = __lsx_vsubwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffda6f;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffe3d7;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffda6e;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffe3d6;
+  __m128i_out = __lsx_vsubwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000807f00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80006b0080808080;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff00011cf0c569;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc0000002b0995850;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffe30f3a97;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffcfe72830;
+  __m128i_out = __lsx_vsubwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ff9f5c25;
+  *((unsigned long*)& __m128i_op0[0]) = 0x58fa6b4000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ff9f5c25;
+  *((unsigned long*)& __m128i_op1[0]) = 0x58fa6b4000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcda585aebbb2836a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000080808080;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffc4cdfd16;
+  __m128i_out = __lsx_vsubwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6453f5e01d6e5000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000fdec000000000;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x801dd5cb0004e058;
+  *((unsigned long*)& __m128i_op0[0]) = 0x77eb15638eeb5fc2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000200000001b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000004e03d;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000008eeb5fc2;
+  __m128i_out = __lsx_vsubwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
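+  /* Even-element widening subtraction of the unsigned doublewords, producing
+     a single 128-bit difference (__lsx_vsubwev_q_du).  */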
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000c0000bd49;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000c7fff000c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000100c6ffef00d;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000c00000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000bfffffffe0f6;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffcfffcfffcfffd;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffcfffdfffcfffd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffcfffdfffcfffd;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff7e00000081;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0a0a0a0a0a0a0a0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0a0a0a0a0a0a0a0a;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffb96bffff57c9;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff6080ffff4417;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a0aa9890a0ac5f3;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op0[0]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x36fbdfdcffdcffdc;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000000;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffefffefffeffff;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000a752a55;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a753500a9fa0d06;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xf589caff5605f2fa;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x087c000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000087c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x10f8000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001000010f8;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffefffff784;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000000000000;
+  __m128i_out = __lsx_vsubwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
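+  /* The __lsx_vsubwod_* cases below mirror the __lsx_vsubwev_* cases above,
+     but operate on the odd-indexed elements, starting with unsigned bytes
+     widened to halfwords (__lsx_vsubwod_h_bu).  */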
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfe07e5fefefdddfe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00020100fedd0c00;
+  *((unsigned long*)& __m128i_result[1]) = 0xff02ff1bff02ff23;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffffff02fff4;
+  __m128i_out = __lsx_vsubwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefff6fff80002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x82c53a0000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc72ef153fc02fdf7;
+  *((unsigned long*)& __m128i_result[1]) = 0x007d00c500ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0038000e0003ff03;
+  __m128i_out = __lsx_vsubwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x007f000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x007f000000000000;
+  __m128i_out = __lsx_vsubwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
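+  /* Odd-element widening subtraction of unsigned halfwords to words
+     (__lsx_vsubwod_w_hu).  */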
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000040000000400;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000000000010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff800000000000;
+  __m128i_out = __lsx_vsubwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfc01fd1300000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe00fd1400010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfc01fd1300000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe00fd1400010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffff800;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vsubwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfe813f00fe813f00;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe813f00fe813f00;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff017fffff017f;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff017fffff017f;
+  __m128i_out = __lsx_vsubwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c7c266e71768fa4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00009c7c00007176;
+  __m128i_out = __lsx_vsubwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000000;
+  __m128i_out = __lsx_vsubwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
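+  /* Odd-element widening subtraction of unsigned words to doublewords
+     (__lsx_vsubwod_d_wu).  */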
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000800000008;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000009;
+  __m128i_out = __lsx_vsubwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000897957687;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000408;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff0007e215b122;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ffeffff7bfff828;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff80010001;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff80010001;
+  __m128i_out = __lsx_vsubwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000af555555555;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000af555555555;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000af5;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000af5;
+  __m128i_out = __lsx_vsubwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2e34594c3b000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000002e34594c;
+  __m128i_out = __lsx_vsubwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsubwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
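+  /* Odd-element widening subtraction of unsigned doublewords to a 128-bit
+     result (__lsx_vsubwod_q_du).  */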
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsubwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000036280001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x42a0000042a02001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000036280001;
+  __m128i_out = __lsx_vsubwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd0b1ffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9d519ee8d2d84f1d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8644ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff0000fffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4a6d0000ffff0000;
+  __m128i_out = __lsx_vsubwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x82c539ffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc72df14afbfafdf9;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7d3ac60000000000;
+  __m128i_out = __lsx_vsubwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000010000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00fe00ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000010000;
+  __m128i_out = __lsx_vsubwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000fffffffe000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000102020204000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefff00000001fff;
+  __m128i_out = __lsx_vsubwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0003000300000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0003000300a10003;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffcfffd00000000;
+  __m128i_out = __lsx_vsubwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0002000200000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x6363636163636363;
+  __m128i_out = __lsx_vsubwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
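+  /* The __lsx_vaddwev_h_bu_b tests below (and the wider variants that follow)
+     exercise widening addition of the even-indexed elements with mixed
+     signedness.  Judging from the vectors checked here, the first operand is
+     treated as unsigned and the second as signed, so a rough scalar model of
+     vaddwev.h.bu.b (names illustrative only) is:
+       for (i = 0; i < 8; i++)
+         dst.h[i] = (short) ((unsigned char) a.b[2 * i]
+                             + (signed char) b.b[2 * i]);  */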
+  *((unsigned long*)& __m128i_op0[1]) = 0x0403cfcf01c1595e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x837cd5db43fc55d4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_result[1]) = 0x0007005200440062;
+  *((unsigned long*)& __m128i_result[0]) = 0x0080005e007f00d8;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffcafff8ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000a0;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe6d4572c8a5835bc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe5017c2ac9ca9fd0;
+  *((unsigned long*)& __m128i_result[1]) = 0x00d3012b015700bb;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001002affca0070;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000fea0000fffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363771163631745;
+  *((unsigned long*)& __m128i_op1[0]) = 0x636363ec6363636c;
+  *((unsigned long*)& __m128i_result[1]) = 0x006300fb00630143;
+  *((unsigned long*)& __m128i_result[0]) = 0x0063ffec0063006c;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9c9c9c9c9c9c9c9d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff0000;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080ffffffff8080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00008080ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xff80ffffffffff80;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff80ffffffff;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00197f26cb658837;
+  *((unsigned long*)& __m128i_op0[0]) = 0x01009aa4a301084b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_result[1]) = 0x0037ffd40083ffe5;
+  *((unsigned long*)& __m128i_result[0]) = 0x001e0052001ffff9;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff00ffffff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000f50000000900;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000090900000998;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff000900ffff98;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x10f881a20ffd02b0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff8ffa2fffdffb0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ff800000;
+  __m128i_out = __lsx_vaddwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
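+  /* Same mixed-signedness even-element widening add, on halfwords widened to
+     words (__lsx_vaddwev_w_hu_h).  */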
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1e0200001e020000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffcfffcfffcfffd;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffcfffdfffcfffd;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffcfffffffd;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffdfffffffd;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff000000ff00;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000010100000101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010100000101;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000400000004;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000080000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa2f54a1ea2f54a1e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x00004a1e00004a1e;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000868686868686;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000868600008785;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x63636b6afe486741;
+  *((unsigned long*)& __m128i_op0[0]) = 0x41f8e880ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe363636363abdf16;
+  *((unsigned long*)& __m128i_op1[0]) = 0x41f8e08016161198;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000cecd00004657;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000c90000011197;
+  __m128i_out = __lsx_vaddwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
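+  /* Mixed-signedness even-element widening add of words to doublewords
+     (__lsx_vaddwev_d_wu_w).  */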
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001000f000e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fff1000ffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000f000e;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000ffffe;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0c07e181ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3430af9effffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000fe00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000fe00ff;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00060012000e002b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000049ffffffaa;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000e002b;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffaa;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000bfffffffe0f6;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffff7a53;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff7f80ffff7f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff7f80ffff7f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff7f80ffff7f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff7f80ffff7f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fffeff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fffeff00;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000003dffc2;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080008000800080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0080006b0000000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000800080;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000b;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000ff00ff;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000455555555;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000055555555;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff7f810100001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000400530050ffa6;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff007fff810001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000400530050ffa6;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffff811001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000a1ff4c;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000001f;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000008000001e;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf9796558e39953fd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x86dd8341b164f12b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9611c3985b3159f5;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000035697d4e;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000013ecaadf2;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ef00ff010f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff010f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc1f03e1042208410;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001000110;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000431f851f;
+  __m128i_out = __lsx_vaddwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
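+  /* Mixed-signedness even-element widening add of doublewords to a 128-bit
+     result (__lsx_vaddwev_q_du_d).  */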
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000030000003f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000030000003f;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffbfffffffbe;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x06b1213ef1efa299;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8312f5424ca4a07f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1f1f1f1f1f1f1f00;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1f1f1f27332b9f00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xa23214697fd03f7f;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x80000000ffffd860;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff80000000;
+  __m128i_out = __lsx_vaddwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
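+  /* The __lsx_vaddwod_* cases below mirror the __lsx_vaddwev_* mixed-sign
+     cases above, but on the odd-indexed elements, starting with bytes
+     (__lsx_vaddwod_h_bu_b).  */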
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000a16316b0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x16161616a16316b0;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ffffa10016;
+  *((unsigned long*)& __m128i_result[0]) = 0x01150115ffa10016;
+  __m128i_out = __lsx_vaddwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x007e007e007e007e;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vaddwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000120002000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000200020;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000003f;
+  __m128i_out = __lsx_vaddwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000fe00fe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00fe00fe00fe00fe;
+  __m128i_out = __lsx_vaddwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000011ffee;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000dfff2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000ff;
+  __m128i_out = __lsx_vaddwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
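+  /* Odd-element mixed-signedness widening add of halfwords to words
+     (__lsx_vaddwod_w_hu_h).  */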
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00e0000000e00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000e0000000e0;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff7100fffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ff00ffffa10016;
+  *((unsigned long*)& __m128i_op1[0]) = 0x01150115ffa10016;
+  *((unsigned long*)& __m128i_result[1]) = 0x000100fe000070a1;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000115ffffffa1;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000fffe0000fffe;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe218ffffea10;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff208fffffa02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000001000f00fe00;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000017fff00fe7f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000f00;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffff00;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x5);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x04faf60009f5f092;
+  *((unsigned long*)& __m128i_op0[0]) = 0x04fafa9200000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff9fffefff9ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000004fa000009f5;
+  *((unsigned long*)& __m128i_result[0]) = 0x000004f3fffffff9;
+  __m128i_out = __lsx_vaddwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000c2f90000bafa;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000c2fa8000c2fa;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000c2f90000bafa;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000003fffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00001fff00001fff;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000000000000;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000807bf0a1f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000800ecedee68;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5847b72626ce61ef;
+  *((unsigned long*)& __m128i_op1[0]) = 0x110053f401e7cced;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x5847bf2de5d8816f;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000155;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vaddwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00001802041b0014;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000c01020d8009;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5641127843c0d41e;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfedb27095b6bff95;
+  *((unsigned long*)& __m128i_op1[1]) = 0xa8beed87bc3f2be1;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0024d8f6a494006a;
+  *((unsigned long*)& __m128i_result[1]) = 0xff7fffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xff7fffffffffffff;
+  __m128i_out = __lsx_vavg_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000201000000000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000007fff8000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001008100000005;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x007fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000f000e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000ffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x003fffff00070007;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000007ffff;
+  __m128i_out = __lsx_vavg_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00007fff;
+  __m128i_out = __lsx_vavg_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff80ff0010ff06;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xedfaedfaedfaedfa;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xf6fd377cf705f680;
+  *((unsigned long*)& __m128i_result[0]) = 0xc0000000bfff8000;
+  __m128i_out = __lsx_vavg_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfc01fd1300000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe00fd1400010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fff7fc01;
+  *((unsigned long*)& __m128i_result[1]) = 0xfe00fe8980000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff007e8a7ffc7e00;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0100000001000100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100010000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffff732a;
+  *((unsigned long*)& __m128i_result[1]) = 0x807f7fff807f807f;
+  *((unsigned long*)& __m128i_result[0]) = 0x807f807f7fff3995;
+  __m128i_out = __lsx_vavg_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000007fffffff;
+  __m128i_out = __lsx_vavg_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f7f7f7f7f7f7f7f;
+  __m128i_out = __lsx_vavg_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3ff0000000007fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000002bfd9461;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000f00;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x1ff800000000477f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000015fec9b0;
+  __m128i_out = __lsx_vavg_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3f80000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3f80000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x1fc0000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1fc07f8000007f80;
+  __m128i_out = __lsx_vavg_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffff00010000fff;
+  __m128i_out = __lsx_vavg_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000037;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff8fff8fff8fff8;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff8fff8fff8fff8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffcfffcfffcfffc;
+  __m128i_out = __lsx_vavg_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000043cf26c7;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000e31d4cae8636;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000021e79364;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000718ea657431b;
+  __m128i_out = __lsx_vavg_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7ff8000000000000;
+  __m128i_out = __lsx_vavg_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000007f7f7f7f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000003fbf3fbf;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff7fff7fff7ff8;
+  __m128i_out = __lsx_vavg_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4050000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x2028000000000000;
+  __m128i_out = __lsx_vavg_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffff46000000ba;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffa30000005c;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x353c8cc4b1ec5b09;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8080008000808080;
+  *((unsigned long*)& __m128i_result[0]) = 0x1a9e466258f62d84;
+  __m128i_out = __lsx_vavg_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000070007;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000007ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000068;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000038003;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000040033;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x003fffff00000000;
+  __m128i_out = __lsx_vavg_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff0000ac26;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff80005613;
+  *((unsigned long*)& __m128i_result[0]) = 0x007f800000000000;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vavg_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000040000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000040000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3fc000005fc00000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3fc000005fc00000;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000002ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000017fffffff;
+  __m128i_out = __lsx_vavg_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000ac;
+  __m128i_out = __lsx_vavg_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0101000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0101030100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0080800000008000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0080818000008000;
+  __m128i_out = __lsx_vavg_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000002;
+  __m128i_out = __lsx_vavg_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000400028000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000020001c020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000022;
+  __m128i_out = __lsx_vavg_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f7f7f7f7f7f7f7f;
+  __m128i_out = __lsx_vavg_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000020000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000010000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000800000008000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000800000008000;
+  __m128i_out = __lsx_vavg_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0017004800c400f9;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ed001a00580070;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffff7ffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x800b7fe38062007b;
+  *((unsigned long*)& __m128i_result[0]) = 0x0076800d802c0037;
+  __m128i_out = __lsx_vavg_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000014155445;
+  *((unsigned long*)& __m128i_result[1]) = 0x33f5c2d7d9f5d800;
+  *((unsigned long*)& __m128i_result[0]) = 0xe4c23ffb002a3a22;
+  __m128i_out = __lsx_vavg_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f7f7f7f7f7f7f7f;
+  __m128i_out = __lsx_vavg_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd6a09e662ab46b31;
+  *((unsigned long*)& __m128i_op0[0]) = 0x34b8122ef4054bb3;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xeb504f33155a3598;
+  *((unsigned long*)& __m128i_result[0]) = 0x1a5c0917fa02a5d9;
+  __m128i_out = __lsx_vavg_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavg_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x9c9c9c9c00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_result[1]) = 0x4e4e4e4e00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000010;
+  __m128i_out = __lsx_vavg_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x08080807f5f5f5f8;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x04040403fafafafc;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ff80;
+  __m128i_out = __lsx_vavg_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0002000200000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100000000;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff80ffa2fff0ff74;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff76ffd8ffe6ffaa;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_op1[0]) = 0xbbc8ecc5f3ced5f3;
+  *((unsigned long*)& __m128i_result[1]) = 0xe01ae8a3fc55dd23;
+  *((unsigned long*)& __m128i_result[0]) = 0xdd9ff64ef9daeace;
+  __m128i_out = __lsx_vavg_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000868686868686;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1e1e1e1e1e1e1e1e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1e1e1e1e1e1e1e1e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0f0f0f0f0f0f0f0f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0f0f525252525252;
+  __m128i_out = __lsx_vavg_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000100000001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x37b951002d81a921;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000400000004c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000047404f4f040d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000082000000826;
+  *((unsigned long*)& __m128i_result[0]) = 0x1b5c4c203e685617;
+  __m128i_out = __lsx_vavg_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000014eb54ab;
+  *((unsigned long*)& __m128i_op0[0]) = 0x14eb6a002a406a00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffdfdc0d;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000a752a55;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a753500950fa306;
+  __m128i_out = __lsx_vavg_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff80ffff7e02;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00feff8000ff80ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf931fd04f832fe02;
+  *((unsigned long*)& __m128i_result[1]) = 0x80007fc000003f00;
+  *((unsigned long*)& __m128i_result[0]) = 0x7d187e427c993f80;
+  __m128i_out = __lsx_vavg_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00c2758000bccf42;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00a975be00accf03;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00c2758000bccf42;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00a975be00accf03;
+  *((unsigned long*)& __m128i_result[1]) = 0x00c2758000bccf42;
+  *((unsigned long*)& __m128i_result[0]) = 0x00a975be00accf03;
+  __m128i_out = __lsx_vavg_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000fffe0001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff0001fffe;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff0000ffff;
+  __m128i_out = __lsx_vavg_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x10f8000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001000010f8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x087c000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000087c;
+  __m128i_out = __lsx_vavg_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0177fff0fffffff0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000011ff8bc;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffefffff784;
+  *((unsigned long*)& __m128i_result[1]) = 0x00bbfff7fffffff7;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff008ff820;
+  __m128i_out = __lsx_vavg_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc1bdceee242071db;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe8c7b756d76aa578;
+  *((unsigned long*)& __m128i_result[1]) = 0xe0dee7779210b8ed;
+  *((unsigned long*)& __m128i_result[0]) = 0xf463dbabebb5d2bc;
+  __m128i_out = __lsx_vavgr_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff80000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0040000000ff00ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0040000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0020000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0020c00000000000;
+  __m128i_out = __lsx_vavgr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3fc000003fc00000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3fc000003fc00000;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000004000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff8004000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffc002000000000;
+  __m128i_out = __lsx_vavgr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffc002000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffc002000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffc002000000000;
+  __m128i_out = __lsx_vavgr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00003ff000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fffc00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffc001fffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00001ff800000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x7ffe800e80000000;
+  __m128i_out = __lsx_vavgr_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff80000000000000;
+  __m128i_out = __lsx_vavgr_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffff8f8dada;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff01018888;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffff8f8dada;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff01018888;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffff8f8dada;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff01018888;
+  __m128i_out = __lsx_vavgr_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000490000004d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001ffffffff;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffe5;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffe5;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffff2;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff2;
+  __m128i_out = __lsx_vavgr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000073;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000002a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000003a;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000015;
+  __m128i_out = __lsx_vavgr_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4080808080808080;
+  __m128i_out = __lsx_vavgr_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000010000003f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000010000003f;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000010000003f;
+  __m128i_out = __lsx_vavgr_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3fffffffc0000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0xff807f807f807f80;
+  *((unsigned long*)& __m128i_result[0]) = 0xff807f807f807f80;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000002bfd9461;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000400400004004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000015ff4a31;
+  __m128i_out = __lsx_vavgr_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vavgr_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xb9fe3640e4eb1b18;
+  *((unsigned long*)& __m128i_op0[0]) = 0x800000005b4b1b18;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffd000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xdcfe1b20f2f60e0c;
+  *((unsigned long*)& __m128i_result[0]) = 0xc00000002e260e0c;
+  __m128i_out = __lsx_vavgr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x111110ff11111141;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111113111111121;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfbffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7bffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x060808ff08080820;
+  *((unsigned long*)& __m128i_result[0]) = 0x4608081808080810;
+  __m128i_out = __lsx_vavgr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffffffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1817161517161514;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1615141315141312;
+  *((unsigned long*)& __m128i_result[1]) = 0x0c0c8b8a8b8b0b0a;
+  *((unsigned long*)& __m128i_result[0]) = 0x8b8a8a898a8a8909;
+  __m128i_out = __lsx_vavgr_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2a7b7c9260f90ee2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1b1c6cdfd57f5736;
+  *((unsigned long*)& __m128i_result[1]) = 0x153e3e49307d0771;
+  *((unsigned long*)& __m128i_result[0]) = 0x0d8e36706ac02b9b;
+  __m128i_out = __lsx_vavgr_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000280000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000140001;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001c88bf0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000001c88bf0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001c88bf0;
+  __m128i_out = __lsx_vavgr_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffff46;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00fe00fe00fe0045;
+  *((unsigned long*)& __m128i_result[1]) = 0x007f007f007f007e;
+  *((unsigned long*)& __m128i_result[0]) = 0x007f007f007effc6;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x807fffff80800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0002000000000007;
+  *((unsigned long*)& __m128i_result[1]) = 0x8003000000020000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4040ffffc0400004;
+  __m128i_out = __lsx_vavgr_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000010000010101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101000001000100;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000008000008080;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080800000800080;
+  __m128i_out = __lsx_vavgr_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000fff0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000208000002080;
+  __m128i_out = __lsx_vavgr_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000ac26;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff80000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000060000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000003000000d613;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000c0000000;
+  __m128i_out = __lsx_vavgr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000ff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000ff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ff00000000;
+  __m128i_out = __lsx_vavgr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff000001ffff9515;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000007fffa9ed;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f8000017fffca8b;
+  __m128i_out = __lsx_vavgr_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffd60015;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x80808080806b000b;
+  __m128i_out = __lsx_vavgr_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffdfffffff8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7ffffffc;
+  __m128i_out = __lsx_vavgr_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xdd6156076967d8c9;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2e3ab5266375e71b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x6eb12b0634b46c67;
+  *((unsigned long*)& __m128i_result[0]) = 0x171d5a9531bb7390;
+  __m128i_out = __lsx_vavgr_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000007fff0018;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000003fff800c;
+  __m128i_out = __lsx_vavgr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff81010102;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff81010102;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff81010102;
+  __m128i_out = __lsx_vavgr_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x33f5c2d7d975d7fe;
+  *((unsigned long*)& __m128i_result[0]) = 0xe4423f7b769f8ffe;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vavgr_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001fffeff98;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0014ffe4ff76ffc4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3131313131313131;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000017fff7fcc;
+  *((unsigned long*)& __m128i_result[0]) = 0x18a3188b9854187b;
+  __m128i_out = __lsx_vavgr_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000003ff8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff9dff9dff9dff9d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffceffceffcf1fcb;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0280000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7500000075000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7500000075000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3bc000003a800000;
+  __m128i_out = __lsx_vavgr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00007d1800007c99;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0a0000001e000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a621b3ebe5e1c02;
+  *((unsigned long*)& __m128i_result[1]) = 0x04ffc0000f000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x05314c2bdf2f4c4e;
+  __m128i_out = __lsx_vavgr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000f50000000900;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000090900000998;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00007a8000000480;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000485000004cc;
+  __m128i_out = __lsx_vavgr_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3bc000003a800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00fe00fe7fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x1d4000001d400000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1e5f007f5d400000;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001000000000;
+  __m128i_out = __lsx_vavgr_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000400000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000007f80;
+  __m128i_out = __lsx_vavgr_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
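+  /* The blocks below switch to the absolute-difference intrinsics
+     __lsx_vabsd_{b,h,w,d} and __lsx_vabsd_{bu,hu,wu,du}: each result
+     lane is |op0 - op1|, with the elements compared as signed or
+     unsigned values according to the suffix.  */
+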
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000003;
+  __m128i_out = __lsx_vabsd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xa8beed87bc3f2be1;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0024d8f6a494006a;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x5641127843c0d41e;
+  *((unsigned long*)& __m128i_result[0]) = 0xfedb27095b6bff95;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000383ffff1fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ca354688;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000038335ca2777;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vabsd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000fff8fff8;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fff80000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fff8fff8;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fff80000;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfda9b23a624082fd;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x03574e3a62407e03;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001010000;
+  __m128i_out = __lsx_vabsd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffcfb799f1;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0282800002828282;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5555001400005111;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffabbeab55110140;
+  *((unsigned long*)& __m128i_result[1]) = 0xaaaaffebcfb748e0;
+  *((unsigned long*)& __m128i_result[0]) = 0xfd293eab528e7ebe;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7505443065413aed;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100d6effefd0498;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7505443065413aed;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0100d6effefd0498;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffd000700000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0014fff500000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f03000780000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f15000a7f010101;
+  __m128i_out = __lsx_vabsd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0400040004000400;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x5);
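+  /* The half-word extracted by __lsx_vpickve2gr_hu above is not compared
+     against an expected value; the call only exercises the intrinsic.  */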
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x56a09e662ab46b31;
+  *((unsigned long*)& __m128i_op0[0]) = 0xb4b8122ef4054bb3;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x56a09e662ab46b31;
+  *((unsigned long*)& __m128i_result[0]) = 0xb4b8122ef4054bb3;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffff01ff01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3c600000ff800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0xc39fffff007fffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000fe00fd;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff000000ff000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff000000ff000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff000000ff000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff000000ff000000;
+  __m128i_out = __lsx_vabsd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21f32eafa486fd38;
+  *((unsigned long*)& __m128i_op0[0]) = 0x407c2ca3d3430357;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x21f32eaf5b7a02c8;
+  *((unsigned long*)& __m128i_result[0]) = 0x407c2ca32cbd0357;
+  __m128i_out = __lsx_vabsd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000006;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000490000004d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000490000004d;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001fffffff9;
+  __m128i_out = __lsx_vabsd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4101010141010100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x64b680a2ae3af8c8;
+  *((unsigned long*)& __m128i_op1[0]) = 0x161c0c363c200824;
+  *((unsigned long*)& __m128i_result[1]) = 0x23b57fa16d39f7c8;
+  *((unsigned long*)& __m128i_result[0]) = 0x161c0c363c200824;
+  __m128i_out = __lsx_vabsd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000060000000e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000127fffffea;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f0101070101010f;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000127f010116;
+  __m128i_out = __lsx_vabsd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8006000080020000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8004000080020000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8006000080020000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8004000080020000;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3ff0010000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ff0010000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3fffff0000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3fffff0000000000;
+  __m128i_out = __lsx_vabsd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1ffffffff8001000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf0bd80bd80bd8000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1ffffffff8001000;
+  *((unsigned long*)& __m128i_result[0]) = 0xf0bd80bd80bd8000;
+  __m128i_out = __lsx_vabsd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003bfb4000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000003bfb4000;
+  __m128i_out = __lsx_vabsd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100010001;
+  __m128i_out = __lsx_vabsd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffd000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002ffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001fd0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001fd0;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001ca02f854;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001ca02f854;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100013fa0;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff01fe0400000006;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000005fffa;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00fe01fc0005fff4;
+  __m128i_out = __lsx_vabsd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffef8;
+  *((unsigned long*)& __m128i_result[0]) = 0xffdfffdfffdffee0;
+  __m128i_out = __lsx_vabsd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffdf;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000021;
+  __m128i_out = __lsx_vabsd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ff08ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ff08ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff0;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x13f9c5b60028a415;
+  *((unsigned long*)& __m128i_op0[0]) = 0x545cab1d7e57c415;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x13f9c5b60028a415;
+  *((unsigned long*)& __m128i_result[0]) = 0x545cab1d81a83bea;
+  __m128i_out = __lsx_vabsd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff0015172b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffb00151727;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000000b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000b;
+  __m128i_out = __lsx_vabsd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080808080808080;
+  __m128i_out = __lsx_vabsd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vabsd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x010003f00000ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x017f03000000ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x010003f00000ff00;
+  *((unsigned long*)& __m128i_op1[0]) = 0x017f03000000ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffbfffffff8;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffbfffffff8;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffdc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffbffffffd8;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffbfffffff8;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff81010102;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff81010102;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000700000004e000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0003000000012020;
+  *((unsigned long*)& __m128i_result[1]) = 0x0038000000051fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x003c000000022021;
+  __m128i_out = __lsx_vabsd_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfefff00000001fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffe1ffc100000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000400000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffe1ffc100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefff00000401fff;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000001fffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000001ffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vabsd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x67eb85af0000b000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0x67157b5100005000;
+  *((unsigned long*)& __m128i_result[0]) = 0x387c7e0a133f2000;
+  __m128i_out = __lsx_vabsd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003ddc5dac;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000003ddc5dac;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffac0a000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffac0a000000;
+  __m128i_out = __lsx_vabsd_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff7fffefffa01ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffbfffefffe01ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfcfcfcfcfcfcfcfd;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfcfcfcfcfcfcfcfd;
+  *((unsigned long*)& __m128i_result[1]) = 0x0305030203020502;
+  *((unsigned long*)& __m128i_result[0]) = 0x0301030203020502;
+  __m128i_out = __lsx_vabsd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4ee376188658d85f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5728dcc85ac760d2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x4e1d76187a58285f;
+  *((unsigned long*)& __m128i_result[0]) = 0x572824385a39602e;
+  __m128i_out = __lsx_vabsd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vabsd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9c9d9b9bbfaa20e9;
+  *((unsigned long*)& __m128i_op0[0]) = 0xbe081c963e6fee68;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000feff23560000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000fd1654860000;
+  *((unsigned long*)& __m128i_result[1]) = 0x6363636463abdf17;
+  *((unsigned long*)& __m128i_result[0]) = 0x41f8e08016161198;
+  __m128i_out = __lsx_vabsd_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf654ad7447e59090;
+  *((unsigned long*)& __m128i_op1[0]) = 0x27b1b106b8145f50;
+  *((unsigned long*)& __m128i_result[1]) = 0x0a545374471b7070;
+  *((unsigned long*)& __m128i_result[0]) = 0x274f4f0648145f50;
+  __m128i_out = __lsx_vabsd_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
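+  /* The remaining blocks cover __lsx_vadda_{b,h,w,d}, which add the
+     per-lane absolute values of the two signed operands:
+     each result element is |op0| + |op1|.  */
+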
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadda_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadda_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_result[1]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_result[0]) = 0x52527d7d52527d7d;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fffc001f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010202050120;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010102020202;
+  __m128i_out = __lsx_vadda_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0003000300030003;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003000700020005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0003000300030003;
+  *((unsigned long*)& __m128i_result[0]) = 0x0003000700020005;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vadda_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f8000004f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f8000004f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x4f8000004f800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4f8000004f800000;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadda_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0003000300030004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0003000300030004;
+  __m128i_out = __lsx_vadda_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5c9c9c9ce3636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x63635c9e63692363;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf0fd800080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000a00028004000;
+  *((unsigned long*)& __m128i_result[1]) = 0x6b9fe3649c9d6363;
+  *((unsigned long*)& __m128i_result[0]) = 0x6363bc9e8b696363;
+  __m128i_out = __lsx_vadda_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadda_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadda_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1111111111111111;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111111111111111;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[1]) = 0x1111113111111131;
+  *((unsigned long*)& __m128i_result[0]) = 0x1111113111111131;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000006a9a5c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000092444;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000006a9a5c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000092444;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000d4ccb8;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000124888;
+  __m128i_out = __lsx_vadda_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x76f424887fffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff082f000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003f000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000f7d1000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x773324887fffffff;
+  __m128i_out = __lsx_vadda_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffacdb6dbecac;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1f5533a694f902c0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5a6f5c53ebed3faa;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa36aca4435b8b8e1;
+  *((unsigned long*)& __m128i_result[1]) = 0x5a6f61865d36d3aa;
+  *((unsigned long*)& __m128i_result[0]) = 0x7bea6962a0bfb621;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000008140c80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000008140c80;
+  __m128i_out = __lsx_vadda_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000fffe0000ff45;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff000000b9;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffd5002affffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x343d8dc6b0ed5a08;
+  *((unsigned long*)& __m128i_result[1]) = 0x012b012c01010246;
+  *((unsigned long*)& __m128i_result[0]) = 0x353e743b50135a4f;
+  __m128i_out = __lsx_vadda_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0003c853c843c87e;
+  *((unsigned long*)& __m128i_result[0]) = 0x0003c853c843c87e;
+  __m128i_out = __lsx_vadda_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000200000002000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffe000ffdf;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000200000002001;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000001fff0021;
+  __m128i_out = __lsx_vadda_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010109;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vadda_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000005452505;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000004442403e4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffe0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000005452505;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000044525043c;
+  __m128i_out = __lsx_vadda_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5d7f5d807fea807f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5d7f5d807fea807f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xbafebb00ffd500fe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadda_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000208000002080;
+  __m128i_out = __lsx_vadda_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000000000000;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003f0000003f0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003f0000003f0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x803e0000803e0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x803e0000803e0000;
+  __m128i_out = __lsx_vadda_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000800000008000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000800000008000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000800000008000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000800000008000;
+  __m128i_out = __lsx_vadda_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001400000014;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001400000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff9000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffc000400000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0007001400000014;
+  *((unsigned long*)& __m128i_result[0]) = 0x0004001000000000;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0xfefeff00fefeff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefeff00fefeff00;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vadda_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000024170000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000020300000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000044470000;
+  __m128i_out = __lsx_vadda_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff01ff01ac025c87;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff01ff01ac465ca1;
+  *((unsigned long*)& __m128i_result[1]) = 0x64616462b76106dc;
+  *((unsigned long*)& __m128i_result[0]) = 0x64616462b71d06c2;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffaeffaeffaeffae;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffaeffaeffaeffae;
+  *((unsigned long*)& __m128i_result[1]) = 0x0051005200510052;
+  *((unsigned long*)& __m128i_result[0]) = 0x0051005200510052;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3bc000003a800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x4480000044800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x45c0000044800000;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[0]) = 0x6363636463636363;
+  __m128i_out = __lsx_vadda_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
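+  /* The following cases exercise __lsx_vmax_{b,h,w,d} and
+     __lsx_vmax_{bu,hu,wu,du}: per-lane signed / unsigned maximum.  */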
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff80000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff80000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0040000000ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0040000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0040000000ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0040000000000000;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmax_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100000000;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000007f0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000007f0000;
+  __m128i_out = __lsx_vmax_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmax_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff80ff0010ff06;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00007f01000eff0a;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff80ff0010ff06;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000080000000;
+  __m128i_out = __lsx_vmax_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c9c9c9c00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2002040404010420;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0101010180800101;
+  *((unsigned long*)& __m128i_result[1]) = 0x2002040404010420;
+  *((unsigned long*)& __m128i_result[0]) = 0x9c9c9c9c80800101;
+  __m128i_out = __lsx_vmax_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xb327b9363c992b2e;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa1e7b475d925730f;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff3c992b2e;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff730f;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff000000ff0000;
+  __m128i_out = __lsx_vmax_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff84fff4ff84fff4;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff84fff4ff84fff4;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m128i_result[1]) = 0xff84fff4ff84fff4;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff0;
+  __m128i_out = __lsx_vmax_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_result[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_result[0]) = 0x000a000a000a000a;
+  __m128i_out = __lsx_vmax_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4101010141010100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000001ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x4101010141010100;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000001ff;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000010000003f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f007f007f007f00;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000010000003f;
+  __m128i_out = __lsx_vmax_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000002bfd9461;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3ff0000000007fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000002bfd9461;
+  __m128i_out = __lsx_vmax_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffff0000000ad3d;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff000fffff000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffff00010001000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffff000fffff000;
+  __m128i_out = __lsx_vmax_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000010000f00;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000010000f01;
+  __m128i_out = __lsx_vmax_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vmax_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffcfffcfffcfffd;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffcfffdfffcfffd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000001f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000001f;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000001f;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000001f;
+  __m128i_out = __lsx_vmax_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xa2a2a2a3a2a2a2a3;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc605c000aedd0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xc605c000aedd0000;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffffffdfffdf;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffdf;
+  __m128i_out = __lsx_vmax_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf001f0010101f002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmax_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff80df00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0010100000100000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1000100000101000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010100000100000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1000100000101000;
+  __m128i_out = __lsx_vmax_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x14ccc6320076a4d2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x685670d27e00682a;
+  *((unsigned long*)& __m128i_result[1]) = 0x14ccc6320076a4d2;
+  *((unsigned long*)& __m128i_result[0]) = 0x685670d27e00682a;
+  __m128i_out = __lsx_vmax_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x09e8e9012fded7fd;
+  *((unsigned long*)& __m128i_op1[0]) = 0x479f64b03373df61;
+  *((unsigned long*)& __m128i_result[1]) = 0x09e8e9012fded7fd;
+  *((unsigned long*)& __m128i_result[0]) = 0x479f64b03373df61;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x77c0404a4000403a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x77c03fd640003fc6;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000003a0000003a;
+  *((unsigned long*)& __m128i_result[1]) = 0x77c0404a4000403a;
+  *((unsigned long*)& __m128i_result[0]) = 0x77c03fd640003fc6;
+  __m128i_out = __lsx_vmax_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000003d0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000003d0000;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xbafebb00ffd500fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbafebb00ffd500fe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xbafebb00ffd500fe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000040;
+  __m128i_out = __lsx_vmax_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x52525252adadadad;
+  *((unsigned long*)& __m128i_op1[0]) = 0x52525252adadadad;
+  *((unsigned long*)& __m128i_result[1]) = 0x52525252adadadad;
+  *((unsigned long*)& __m128i_result[0]) = 0x52525252adadadad;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x52525252adadadad;
+  *((unsigned long*)& __m128i_op0[0]) = 0x52525252adadadad;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5b5b5b5aa4a4a4a6;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x5b5b5b5aadadadad;
+  __m128i_out = __lsx_vmax_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0808080700000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0007001400000014;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0004001000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000053a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000700140000053a;
+  *((unsigned long*)& __m128i_result[0]) = 0x0004001000000000;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000b3a6000067da;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00004e420000c26a;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd78cfd70b5f65d76;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5779108fdedda7e4;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000b3a6000067da;
+  *((unsigned long*)& __m128i_result[0]) = 0x5779108f0000c26a;
+  __m128i_out = __lsx_vmax_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmax_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000100;
+  __m128i_out = __lsx_vmax_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000034;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff80c400000148;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff80c1ffffe8de;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000148;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000034;
+  __m128i_out = __lsx_vmax_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x30eb022002101b20;
+  *((unsigned long*)& __m128i_op0[0]) = 0x020310edc003023d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x30eb020302101b03;
+  *((unsigned long*)& __m128i_op1[0]) = 0x020310d0c0030220;
+  *((unsigned long*)& __m128i_result[1]) = 0x30eb022002101b20;
+  *((unsigned long*)& __m128i_result[0]) = 0x020310edc003023d;
+  __m128i_out = __lsx_vmax_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfe03fe01fe01fe01;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe3bfa3ffe3bfb21;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001d001d001d001d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001d001d001d0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x001d001d001d001d;
+  *((unsigned long*)& __m128i_result[0]) = 0x001d001d001d0000;
+  __m128i_out = __lsx_vmax_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfe3bfb01fe3bfe01;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe03fe3ffe01fa21;
+  *((unsigned long*)& __m128i_result[1]) = 0xfe3bfb01fe3bfe01;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe03fe3ffe01fa21;
+  __m128i_out = __lsx_vmax_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000f50000007500;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00007e1600007d98;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000fe00fe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000f50000fe75fe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00fe7efe00fe7dfe;
+  __m128i_out = __lsx_vmax_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000051649b6;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd2f005e44bb43416;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000003e0000003f;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000051649b6;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000003e0000003f;
+  __m128i_out = __lsx_vmax_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ebd20000714f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00012c8a0000a58a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ebd20000714f;
+  *((unsigned long*)& __m128i_result[0]) = 0x00012c8a0000a58a;
+  __m128i_out = __lsx_vmax_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf6548a1747e59090;
+  *((unsigned long*)& __m128i_op1[0]) = 0x27b169bbb8145f50;
+  *((unsigned long*)& __m128i_result[1]) = 0xf6548a1747e59090;
+  *((unsigned long*)& __m128i_result[0]) = 0x27b169bbb8145f50;
+  __m128i_out = __lsx_vmax_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000155;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000155;
+  __m128i_out = __lsx_vmax_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
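+  /* The following cases exercise __lsx_vmin_{b,h,w,d} and
+     __lsx_vmin_{bu,hu,wu,du}: per-lane signed / unsigned minimum.  */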
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8493941335f5cc0c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x625a7312befcb21e;
+  *((unsigned long*)& __m128i_result[1]) = 0x8493941300000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000002befcb21e;
+  __m128i_out = __lsx_vmin_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000210011084;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000210011084;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000210011084;
+  __m128i_out = __lsx_vmin_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000017f0a82;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000040100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffe000ffff2382;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000040100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010000;
+  __m128i_out = __lsx_vmin_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmin_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmin_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000a16316b0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000063636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000a1630000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c9c9c9c63636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x9c9c9c9c00000000;
+  __m128i_out = __lsx_vmin_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x03574e3a62407e03;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7da9b23a624082fd;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x03574e3a62407e03;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001010000;
+  __m128i_out = __lsx_vmin_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0505050505050505;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000005050000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0028280000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0028280000282800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0028280000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000282800;
+  __m128i_out = __lsx_vmin_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffc0ff81000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000600000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffc0ff81000000;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000078c00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000d;
+  __m128i_out = __lsx_vmin_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfc01fd13fc02fe0c;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe00fd14fe01fd16;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffff0000010000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfc01fd1300000001;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe00fd1400010000;
+  __m128i_out = __lsx_vmin_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000401000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5a5a5a5a5b5a5b5a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5a5a5a5a5b5a5b5a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_result[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_result[0]) = 0x000a000a000a000a;
+  __m128i_out = __lsx_vmin_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00fdffffffffff02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe80000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe80ffffffffff02;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe80ffffffffff02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x027e0000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe80ffffffffff02;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x027c027c000027c0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x027c027c000027c0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x027c027c000027c0;
+  __m128i_out = __lsx_vmin_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3fffff0000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3fffff0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffff0000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffff000000ff00;
+  __m128i_out = __lsx_vmin_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ff91fffffff5;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff00650001ffb0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000067400002685;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ff91fffffff5;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff00650000ff85;
+  __m128i_out = __lsx_vmin_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000de0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000006f00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001f0a;
+  __m128i_out = __lsx_vmin_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xa2a2a2a3a2a2a2a3;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc605c000aedd0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5d5d5d5d5d5d5d5d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5d5d5d5d5d5d0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xa2a2a2a3a2a2a2a3;
+  *((unsigned long*)& __m128i_result[0]) = 0xc605c000aedd0000;
+  __m128i_out = __lsx_vmin_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2020202020207f7f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f417f417f027e03;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m128i_result[0]) = 0x2020202020207e03;
+  __m128i_out = __lsx_vmin_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001ca02f854;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffcafff8ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000a0;
+  __m128i_out = __lsx_vmin_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001ca02f854;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2000200020002000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2000200020002000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000120002000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100013fa0;
+  __m128i_out = __lsx_vmin_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000fea0000fffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff0cff78ff96ff14;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00008d3200000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x09e8e9012fded7fd;
+  *((unsigned long*)& __m128i_op1[0]) = 0x479f64b03373df61;
+  *((unsigned long*)& __m128i_result[1]) = 0x00008d3200000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000003000000d613;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000c0000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000c0000000;
+  __m128i_out = __lsx_vmin_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0c0b0a090b0a0908;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0a09080709080706;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0c0b0a090b0a0908;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a09080709080706;
+  *((unsigned long*)& __m128i_result[1]) = 0x0c0b0a090b0a0908;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a09080709080706;
+  __m128i_out = __lsx_vmin_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000300000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmin_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff007fff810001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000400530050ffa6;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff7f810100001000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001fffc0ffffe001;
+  *((unsigned long*)& __m128i_result[1]) = 0xff7f810100001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000400530050ffa6;
+  __m128i_out = __lsx_vmin_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffe0004fffe0004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000010000000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000005003a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3e25c8317394dae6;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcda585aebbb2836a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xcda585aebbb2836a;
+  __m128i_out = __lsx_vmin_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xa87745dbd93e4ea1;
+  *((unsigned long*)& __m128i_op1[0]) = 0xaa49601e26d39860;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000200000001b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x98147a504d145000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x377b810912c0e000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x98147a504d145000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x377b810912c0e000;
+  *((unsigned long*)& __m128i_result[1]) = 0x98147a504d145000;
+  *((unsigned long*)& __m128i_result[0]) = 0x377b810912c0e000;
+  __m128i_out = __lsx_vmin_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfcfcfcfcfcfcfcfd;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfcfcfcfcfcfcfcfd;
+  *((unsigned long*)& __m128i_result[1]) = 0xfcfcfcfcfcfcfcfd;
+  *((unsigned long*)& __m128i_result[0]) = 0xfcfcfcfcfcfcfcfd;
+  __m128i_out = __lsx_vmin_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x86dd8341b164f12b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9611c3985b3159f5;
+  *((unsigned long*)& __m128i_result[1]) = 0x86dd8341b164f12b;
+  *((unsigned long*)& __m128i_result[0]) = 0x9611c3985b3159f5;
+  __m128i_out = __lsx_vmin_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf9796558e39953fd;
+  *((unsigned long*)& __m128i_result[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_result[0]) = 0xf9796558e39953fd;
+  __m128i_out = __lsx_vmin_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf9796558e39953fd;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf9796558e39953fd;
+  *((unsigned long*)& __m128i_result[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_result[0]) = 0xf9796558e39953fd;
+  __m128i_out = __lsx_vmin_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffe0000000;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff800000ff800000;
+  __m128i_out = __lsx_vmin_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2006454690d3de87;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2006454690d3de87;
+  *((unsigned long*)& __m128i_result[1]) = 0x2006454652525252;
+  *((unsigned long*)& __m128i_result[0]) = 0x2006454652525252;
+  __m128i_out = __lsx_vmin_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_op1[0]) = 0xbbc8ecc5f3ced5f3;
+  *((unsigned long*)& __m128i_result[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_result[0]) = 0xbbc8ecc5f3ced5f3;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00007efe7f7f8000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000b81c8382;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000077af9450;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000077af9450;
+  __m128i_out = __lsx_vmin_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc090380000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc090380000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m128i_result[0]) = 0xc090380000000000;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000008680f1ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xff80ffffff80ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xff80ffff8680f1ff;
+  __m128i_out = __lsx_vmin_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffff00ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmin_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf6548a1747e59090;
+  *((unsigned long*)& __m128i_op0[0]) = 0x27b169bbb8145f50;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf6548a1747e59090;
+  *((unsigned long*)& __m128i_op1[0]) = 0x27b169bbb8145f50;
+  *((unsigned long*)& __m128i_result[1]) = 0xf6548a1747e59090;
+  *((unsigned long*)& __m128i_result[0]) = 0x27b169bbb8145f50;
+  __m128i_out = __lsx_vmin_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00e400ff00e400;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff01e41ffff0e440;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffe4ffffffe4ff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffe4fffff0e4ff;
+  __m128i_out = __lsx_vmin_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmin_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
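+  /* The cases below switch to the max-with-immediate intrinsics
+     (__lsx_vmaxi_{b,h,w,d} and their unsigned variants); only op0 is
+     loaded because the second operand is an immediate.  */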
+  *((unsigned long*)& __m128i_op0[1]) = 0xbf8000000000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcf00000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xbf8000000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xcf00000000000000;
+  __m128i_out = __lsx_vmaxi_du(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000011;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000011;
+  __m128i_out = __lsx_vmaxi_du(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000002;
+  __m128i_out = __lsx_vmaxi_d(__m128i_op0,2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfe07e5fefefdddfe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00020100fedd0c00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000b0000000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000201000000000b;
+  __m128i_out = __lsx_vmaxi_w(__m128i_op0,11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000001c;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000001c;
+  __m128i_out = __lsx_vmaxi_du(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000401000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100000004;
+  __m128i_out = __lsx_vmaxi_w(__m128i_op0,4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000007f00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000007f00;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001000000;
+  __m128i_out = __lsx_vmaxi_d(__m128i_op0,-4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000800000008;
+  __m128i_out = __lsx_vmaxi_wu(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaxi_h(__m128i_op0,-1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaxi_w(__m128i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000d;
+  __m128i_out = __lsx_vmaxi_du(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001600000016;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001600000016;
+  __m128i_out = __lsx_vmaxi_wu(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff489b693120950;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffc45a851c40c18;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000000a;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000a;
+  __m128i_out = __lsx_vmaxi_d(__m128i_op0,10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaxi_h(__m128i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000b;
+  __m128i_out = __lsx_vmaxi_du(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0011001100110011;
+  __m128i_out = __lsx_vmaxi_hu(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0a0a0a0a0a0a0a0a;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a0a0a0a0a0a0a0a;
+  __m128i_out = __lsx_vmaxi_b(__m128i_op0,10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000020002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000020002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0303030303030303;
+  *((unsigned long*)& __m128i_result[0]) = 0x0303030303030303;
+  __m128i_out = __lsx_vmaxi_bu(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmaxi_wu(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaxi_b(__m128i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x11000f2010000e20;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0f000d200e000c20;
+  *((unsigned long*)& __m128i_result[1]) = 0x11000f2010000e20;
+  *((unsigned long*)& __m128i_result[0]) = 0x0f000d200e000c20;
+  __m128i_out = __lsx_vmaxi_hu(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x43d3e0000013e000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x43d3e0000013e000;
+  *((unsigned long*)& __m128i_result[1]) = 0x43d3e0000013e000;
+  *((unsigned long*)& __m128i_result[0]) = 0x43d3e0000013e000;
+  __m128i_out = __lsx_vmaxi_du(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001ffff0003ffff0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000fffefffefffef;
+  *((unsigned long*)& __m128i_result[1]) = 0x001ffff0003ffff0;
+  *((unsigned long*)& __m128i_result[0]) = 0x000fffefffefffef;
+  __m128i_out = __lsx_vmaxi_hu(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x027c027c000027c0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x027c027c000027c0;
+  __m128i_out = __lsx_vmaxi_h(__m128i_op0,-6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000001fc00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000001fc00000000;
+  __m128i_out = __lsx_vmaxi_h(__m128i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1111111111111111;
+  *((unsigned long*)& __m128i_result[0]) = 0x1111111111111111;
+  __m128i_out = __lsx_vmaxi_bu(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1000100010001000;
+  __m128i_out = __lsx_vmaxi_b(__m128i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001f0a;
+  __m128i_out = __lsx_vmaxi_w(__m128i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003be14000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000003bfb4000;
+  __m128i_out = __lsx_vmaxi_b(__m128i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000050000007b;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000500000005;
+  __m128i_out = __lsx_vmaxi_w(__m128i_op0,5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100010001007c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x000100010001007c;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vmaxi_du(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000400000004;
+  __m128i_out = __lsx_vmaxi_w(__m128i_op0,4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0005000500050005;
+  *((unsigned long*)& __m128i_result[0]) = 0x0005000500050005;
+  __m128i_out = __lsx_vmaxi_hu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0b0b0b0b0b0b0b0b;
+  *((unsigned long*)& __m128i_result[0]) = 0x0b0b0b0b0b0b0b0b;
+  __m128i_out = __lsx_vmaxi_b(__m128i_op0,11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1111111111111111;
+  *((unsigned long*)& __m128i_result[0]) = 0x1111111111111111;
+  __m128i_out = __lsx_vmaxi_bu(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0a0a0a0a0a0a0a0a;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a0a0a0a0a0a0a0a;
+  __m128i_out = __lsx_vmaxi_bu(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001fffff001fffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001fffff001fffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x001fffff001fffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x001fffff001fffff;
+  __m128i_out = __lsx_vmaxi_w(__m128i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffacdb6dbecac;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1f5533a694f902c0;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffacdb6dbecac;
+  *((unsigned long*)& __m128i_result[0]) = 0x1f5533a694f902c0;
+  __m128i_out = __lsx_vmaxi_wu(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007ffffffb;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x010101017f010101;
+  __m128i_out = __lsx_vmaxi_b(__m128i_op0,1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x37c0001000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x37c0001000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x37c0001000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x37c0001000000001;
+  __m128i_out = __lsx_vmaxi_wu(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000007f8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000007f8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0b0b0b0b0b0b0b0b;
+  *((unsigned long*)& __m128i_result[0]) = 0x0b0b0b0b0b0b0b0b;
+  __m128i_out = __lsx_vmaxi_b(__m128i_op0,11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000001d;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000001d;
+  __m128i_out = __lsx_vmaxi_du(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000020000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000020000020;
+  *((unsigned long*)& __m128i_result[1]) = 0x001d001d20000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x001d001d20000020;
+  __m128i_out = __lsx_vmaxi_hu(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000fff;
+  __m128i_out = __lsx_vmaxi_h(__m128i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000b0000000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000b0000000b;
+  __m128i_out = __lsx_vmaxi_w(__m128i_op0,11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000001b;
+  __m128i_out = __lsx_vmaxi_du(__m128i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000c;
+  __m128i_out = __lsx_vmaxi_b(__m128i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000007ff000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000a1ff4c;
+  *((unsigned long*)& __m128i_result[1]) = 0x000300037ff000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0003000300a10003;
+  __m128i_out = __lsx_vmaxi_h(__m128i_op0,3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vmaxi_b(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000e0000000e;
+  __m128i_out = __lsx_vmaxi_w(__m128i_op0,14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00003fff00010000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00123fff00120012;
+  *((unsigned long*)& __m128i_result[0]) = 0x0012001200120012;
+  __m128i_out = __lsx_vmaxi_hu(__m128i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000100010;
+  __m128i_out = __lsx_vmaxi_wu(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaxi_d(__m128i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaxi_h(__m128i_op0,-2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0606060606060606;
+  *((unsigned long*)& __m128i_result[0]) = 0x0606060606060606;
+  __m128i_out = __lsx_vmaxi_b(__m128i_op0,6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000900000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000900000009;
+  __m128i_out = __lsx_vmaxi_w(__m128i_op0,9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000600000006;
+  __m128i_out = __lsx_vmaxi_w(__m128i_op0,6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x001a001a001a001a;
+  *((unsigned long*)& __m128i_result[0]) = 0x001a001a001a001a;
+  __m128i_out = __lsx_vmaxi_hu(__m128i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x000b000b000b000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000b000b000b000b;
+  __m128i_out = __lsx_vmaxi_h(__m128i_op0,11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x001e001e001e001e;
+  *((unsigned long*)& __m128i_result[0]) = 0x001e001e001e001e;
+  __m128i_out = __lsx_vmaxi_hu(__m128i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000004;
+  __m128i_out = __lsx_vmaxi_du(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x001d001d001d001d;
+  *((unsigned long*)& __m128i_result[0]) = 0x001d001d001d001d;
+  __m128i_out = __lsx_vmaxi_hu(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0004000400040004;
+  __m128i_out = __lsx_vmaxi_h(__m128i_op0,4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0fffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaxi_b(__m128i_op0,-16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x63636b6afe486741;
+  *((unsigned long*)& __m128i_op0[0]) = 0x41f8e880ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x63636b6afe486741;
+  *((unsigned long*)& __m128i_result[0]) = 0x41f8e880ffffffff;
+  __m128i_out = __lsx_vmaxi_d(__m128i_op0,-2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f80000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f80000000000007;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000700000007;
+  __m128i_out = __lsx_vmaxi_w(__m128i_op0,7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
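+  /* The following cases exercise the min-with-immediate intrinsics
+     (__lsx_vmini_{b,h,w,d} and their unsigned variants).  */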
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000002;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000900000009;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000900000009;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffff4fffffff4;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffff4fffffff4;
+  __m128i_out = __lsx_vmini_w(__m128i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffefffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffefffefffffffc;
+  __m128i_out = __lsx_vmini_b(__m128i_op0,4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0001ffff0001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000a163000016b0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0303000103030001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000030300000303;
+  __m128i_out = __lsx_vmini_bu(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_w(__m128i_op0,1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff6fff6fff6fff6;
+  __m128i_out = __lsx_vmini_h(__m128i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd8248069ffe78077;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0d0d0d0d0d0d0d0d;
+  __m128i_out = __lsx_vmini_bu(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x03574e3a62407e03;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff7;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7da9b23a624082fd;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0505050505050505;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000005050000;
+  __m128i_out = __lsx_vmini_bu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000001f;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_w(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_hu(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1000000010000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100100000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffff1;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff1;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000034;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000006;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1716151416151413;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1514131214131211;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff3fff3fff3fff3;
+  __m128i_out = __lsx_vmini_h(__m128i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000e0000000e;
+  __m128i_out = __lsx_vmini_bu(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000006;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff84fff4ff84fff4;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00a6ffceffb60052;
+  *((unsigned long*)& __m128i_result[1]) = 0xff84fff4ff84fff4;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff0;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,-16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffff9;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff9;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffff3fffffff3;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffff3fffffff3;
+  __m128i_out = __lsx_vmini_w(__m128i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001ffff0003ffff0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000fffefffefffef;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffefffef;
+  __m128i_out = __lsx_vmini_w(__m128i_op0,0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000040004000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010002000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0bd80bd80bd80bd8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0bd80bd80bd80bd8;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x80000000b57ec564;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000083ff0be0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0014000000140014;
+  *((unsigned long*)& __m128i_result[0]) = 0x0014000000140014;
+  __m128i_out = __lsx_vmini_hu(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0013001300130013;
+  *((unsigned long*)& __m128i_result[0]) = 0x0013001300130013;
+  __m128i_out = __lsx_vmini_hu(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00002f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000958aefff895e;
+  *((unsigned long*)& __m128i_result[1]) = 0xfafafafafafafafa;
+  *((unsigned long*)& __m128i_result[0]) = 0xfafa958aeffa89fa;
+  __m128i_out = __lsx_vmini_b(__m128i_op0,-6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff0000007f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x111110ff11111141;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111113111111100;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,-1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff01fe0400000006;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000500000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff01fe0400000005;
+  __m128i_out = __lsx_vmini_w(__m128i_op0,5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m128i_result[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefefefefefefefe;
+  __m128i_out = __lsx_vmini_h(__m128i_op0,2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x55aa55c3d5aa55c4;
+  *((unsigned long*)& __m128i_op0[0]) = 0xaa55556fd5aaaac1;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000000c;
+  *((unsigned long*)& __m128i_result[0]) = 0xaa55556fd5aaaac1;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffafffffffa;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffafffffffa;
+  __m128i_out = __lsx_vmini_w(__m128i_op0,-6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3fbf3fbf00007fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000e0000000e;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_h(__m128i_op0,3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000d0000000d;
+  __m128i_out = __lsx_vmini_w(__m128i_op0,13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmini_b(__m128i_op0,1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_hu(__m128i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffff4;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff4;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000003fc00ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001fe01fe00;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000000a;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000a;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffffb;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffb;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0080008000800080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001300000013;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_bu(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000900000009;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000b;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vmini_hu(__m128i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000adadadad;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000adadadad;
+  *((unsigned long*)& __m128i_result[1]) = 0xfbfbfbfbadadadad;
+  *((unsigned long*)& __m128i_result[0]) = 0xfbfbfbfbadadadad;
+  __m128i_out = __lsx_vmini_b(__m128i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000101010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000014;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_h(__m128i_op0,11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080808080808080;
+  __m128i_out = __lsx_vmini_w(__m128i_op0,8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_b(__m128i_op0,12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x345002920f3017d6;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffff7fffffff7;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffff7fffffff7;
+  __m128i_out = __lsx_vmini_w(__m128i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfcfcfcdcfcfcfcdc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfcfcfcdcfcfcfcdc;
+  *((unsigned long*)& __m128i_result[1]) = 0xfcfcfcdcfcfcfcdc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfcfcfcdcfcfcfcdc;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003ddc5dac;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001030103;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffc;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,-4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000085af0000b000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00017ea200002000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff7;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000001fffdfffdff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000001fffdfffdff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010101010101;
+  __m128i_out = __lsx_vmini_bu(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000009c007c00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000071007600;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000009000900;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000009000900;
+  __m128i_out = __lsx_vmini_bu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000d3460001518a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000084300000e55f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000016;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000016;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000100;
+  __m128i_out = __lsx_vmini_b(__m128i_op0,5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_h(__m128i_op0,3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_op0[0]) = 0xbbc8ecc5f3ced5f3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0303030303030303;
+  *((unsigned long*)& __m128i_result[0]) = 0x0303030303030303;
+  __m128i_out = __lsx_vmini_bu(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xf1f1f1f1f1f1f1f1;
+  *((unsigned long*)& __m128i_result[0]) = 0xf1f1f1f1f1f1f1f1;
+  __m128i_out = __lsx_vmini_b(__m128i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd3220000d3f20000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8bff0000a7b80000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0909000009090000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0909000009090000;
+  __m128i_out = __lsx_vmini_bu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000f50000007500;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00007e1600007d98;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000f50000000900;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000090900000998;
+  __m128i_out = __lsx_vmini_b(__m128i_op0,9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff2356fe165486;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5efeb3165bd7653d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000007;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000007;
+  __m128i_out = __lsx_vmini_du(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xb7032c34093d35ab;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe7a6533b800001b8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000900000009;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000900000009;
+  __m128i_out = __lsx_vmini_wu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x02b010f881a281a2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x27b169bbb8145f50;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000200020002;
+  __m128i_out = __lsx_vmini_hu(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x10f881a20ffd02b0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128i_result[1]) = 0xf1f181a2f1f1f1b0;
+  *((unsigned long*)& __m128i_result[0]) = 0xf1f1f1f1f180f1f1;
+  __m128i_out = __lsx_vmini_b(__m128i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffff4;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff4;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00e400ff00e400;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff01e41ffff0ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xff00e400ff00e400;
+  *((unsigned long*)& __m128i_result[0]) = 0xff01e41ffff0ffff;
+  __m128i_out = __lsx_vmini_d(__m128i_op0,14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x54feed87bc3f2be1;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8064d8f6a494afcb;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1e801ffc7fc00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffe003c1f0077;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffff0074230438;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff0000000438;
+  __m128i_out = __lsx_vmul_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000463fd2902d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5ccd54bbfcac806c;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000800800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000800800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000004000000000;
+  __m128i_out = __lsx_vmul_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff5fff4002ffff5;
+  *((unsigned long*)& __m128i_op1[1]) = 0xaa858644fb8b3d49;
+  *((unsigned long*)& __m128i_op1[0]) = 0x18499e2cee2cc251;
+  *((unsigned long*)& __m128i_result[1]) = 0x8644000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xaed495f03343a685;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7505443065413aed;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100d6effefd0498;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7505443065413aed;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0100d6effefd0498;
+  *((unsigned long*)& __m128i_result[1]) = 0xb71289fdfbea3f69;
+  *((unsigned long*)& __m128i_result[0]) = 0x4e17c2ffb4851a40;
+  __m128i_out = __lsx_vmul_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfc01fcfefc02fdf7;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe00fcfffe01fd01;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfc01fd1300000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe00fd1400010000;
+  *((unsigned long*)& __m128i_result[1]) = 0xc72ef153fc02fdf7;
+  *((unsigned long*)& __m128i_result[0]) = 0xca31bf15fd010000;
+  __m128i_out = __lsx_vmul_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc000c000c000ff81;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5d5d5d5d5d5d5d5d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5d5d5d5d5d5d0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xa2a2a2a3a2a2a2a3;
+  *((unsigned long*)& __m128i_result[0]) = 0xc605c000aedd0000;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xacc8c794af2caf01;
+  *((unsigned long*)& __m128i_op0[0]) = 0xa91e2048938c40f0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xeeb1e4f43c3763f3;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff5a6fe3d7;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000021e79364;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000718ea657431b;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000006ca193ec;
+  *((unsigned long*)& __m128i_result[0]) = 0x00008e72b5b94cad;
+  __m128i_out = __lsx_vmul_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffe000ffffe000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x467f6080467d607f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001000;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x007f008000ea007f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0042003e0042002f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001fffc0001fffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffe0004fffe0004;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xc1bdceee242070db;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe8c7b756d76aa478;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3f433212dce09025;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf359f359f359f359;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf359f359f359f359;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf9796558e39953fd;
+  *((unsigned long*)& __m128i_result[1]) = 0x86dd8341b164f12b;
+  *((unsigned long*)& __m128i_result[0]) = 0x9611c3985b3159f5;
+  __m128i_out = __lsx_vmul_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffd27db010d20fbf;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffd27db010d20fbf;
+  *((unsigned long*)& __m128i_result[1]) = 0x9727b8499727b849;
+  *((unsigned long*)& __m128i_result[0]) = 0x12755900b653f081;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0303030303030303;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0303030303030303;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x02f3030303030303;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x06d9090909090909;
+  __m128i_out = __lsx_vmul_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff81ffff7f03;
+  *((unsigned long*)& __m128i_op0[0]) = 0x04ffff8101ff81ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0a0000001e000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a000000f6000000;
+  __m128i_out = __lsx_vmul_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x317fce80317fce80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmul_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc0c00000c0c00000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc0c00c01c2cd0009;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000011;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000011;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000011;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000011;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vmuh_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff7fffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0040000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000efffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000c5ac01015b;
+  *((unsigned long*)& __m128i_op0[0]) = 0xaaacac88a3a9a96a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000007c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000005f0003e000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000897957687;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000408;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000003397dd140;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000004bd7cdd20;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0016ffb00016ffb0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0016ffb00016ffb0;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000004a294b;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000006d04bc;
+  __m128i_out = __lsx_vmuh_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000015516a768038;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffff9ed2e1c000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x007ffe7ffe400000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x007ffd0001400840;
+  __m128i_out = __lsx_vmuh_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x007ffd0001400840;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x007ffd0001400840;
+  *((unsigned long*)& __m128i_result[1]) = 0x3fffffff80000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00003ffd000a4000;
+  __m128i_out = __lsx_vmuh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0032000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0032000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000009c400000000;
+  __m128i_out = __lsx_vmuh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x059a35ef139a8e00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc0fffff000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffe00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0202fe02fd020102;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000202fe02;
+  __m128i_out = __lsx_vmuh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ef400ad21fc7081;
+  *((unsigned long*)& __m128i_op0[0]) = 0x28bf0351ec69b5f2;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffb96bffff57c9;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff6080ffff4417;
+  *((unsigned long*)& __m128i_result[1]) = 0x7ef3ddac21fc5a2c;
+  *((unsigned long*)& __m128i_result[0]) = 0x28bee9edec690869;
+  __m128i_out = __lsx_vmuh_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffe;
+  __m128i_out = __lsx_vmuh_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000000000007;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000006362ffff;
+  __m128i_out = __lsx_vmuh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000038003;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000040033;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000080000068;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000014;
+  __m128i_out = __lsx_vmuh_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000200000002000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffe000ffdf;
+  *((unsigned long*)& __m128i_result[1]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xbf3efff536d5169b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ebdfffffddf3f40;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3f5ec0a0feefa0b0;
+  __m128i_out = __lsx_vmuh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff0000ac26;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ffffff81fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffff00ffff7e01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000fffe01fd02;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000fe86;
+  __m128i_out = __lsx_vmuh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_result[0]) = 0x4040404040404040;
+  __m128i_out = __lsx_vmuh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffa800000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000157;
+  __m128i_out = __lsx_vmuh_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001001100110068;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1d8000001d800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1d8000001d800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1d8000001d800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1d8000001d800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0366000003660000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0366000003660000;
+  __m128i_out = __lsx_vmuh_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffff7ffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff7ffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffff7ffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff7ffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x3fffffff3ffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x3fffffff3ffffffe;
+  __m128i_out = __lsx_vmuh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff0101ffffe000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffffa0204000;
+  *((unsigned long*)& __m128i_result[1]) = 0x001f7fc100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x001f7fff00000000;
+  __m128i_out = __lsx_vmuh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbfd10d0d7b6b6b73;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc5c534920000c4ed;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000000010000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000cd630000cd63;
+  *((unsigned long*)& __m128i_op1[1]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffcd63ffffcd63;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffd765ffffd765;
+  __m128i_out = __lsx_vmuh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001a64b345308091;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001f2f2cab1c732a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1baf8eabd26bc629;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1c2640b9a8e9fb49;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002dab8746acf8e;
+  *((unsigned long*)& __m128i_result[0]) = 0x00036dd1c5c15856;
+  __m128i_out = __lsx_vmuh_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff8000010f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fff80000;
+  __m128i_out = __lsx_vmuh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000214f;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc31b63d846ebc810;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ff0000800000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff941d;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000010a7;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000046ebaa2c;
+  __m128i_out = __lsx_vmuh_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000ef0000000003b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00003a7fc58074ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000eeff1100e;
+  __m128i_out = __lsx_vmuh_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fffe0002;
+  __m128i_out = __lsx_vmuh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000cf4f4f00;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000cf4f4f00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x10f881a20ffd02b0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0ff780a10efc01af;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fe7f0000;
+  __m128i_out = __lsx_vmuh_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmuh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f800000976801fe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x837c1ae57f8012ed;
+  *((unsigned long*)& __m128i_result[1]) = 0x976801fd6897fe02;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f8012ec807fed13;
+  __m128i_out = __lsx_vmulwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xbf8000000000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcf00000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000e0000000e0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000e0000000e0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000c400;
+  __m128i_out = __lsx_vmulwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x004e005500060031;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff870068fff5ffb3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfe01fe01fe01fe01;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fe01fe01;
+  __m128i_out = __lsx_vmulwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfe01fe01fe01fe01;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fe01fe01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff020000fff4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1e801ffc7fc00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00001ee100000000;
+  __m128i_out = __lsx_vmulwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000001fc0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000feff01;
+  *((unsigned long*)& __m128i_result[0]) = 0x00feff0100000000;
+  __m128i_out = __lsx_vmulwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000017fda829;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000efffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffe000ffff1fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000040400000383;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffe000ffff1fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000383;
+  *((unsigned long*)& __m128i_result[0]) = 0xe400000003ffc001;
+  __m128i_out = __lsx_vmulwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vmulwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x80000000fff8fff8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80000000fff80000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80000000fff80000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x80000000fff80000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000004000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff8004000000000;
+  __m128i_out = __lsx_vmulwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101010202050120;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010102020202;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001fffe00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0018001800180018;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0018001800180018;
+  *((unsigned long*)& __m128i_op1[1]) = 0x85bd6b0e94d89998;
+  *((unsigned long*)& __m128i_op1[0]) = 0xd83c8081ffff808f;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff489b693120950;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffc45a851c40c18;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x78c00000ff000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000b5207f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2000000020000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000200200000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x6a57a30ff0000000;
+  __m128i_out = __lsx_vmulwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000006;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000006;
+  __m128i_out = __lsx_vmulwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfa31dfa21672e711;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1304db85e468073a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000006;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf51cf8dad6040188;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0982e2daf234ed87;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0ae3072529fbfe78;
+  __m128i_out = __lsx_vmulwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0100010000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0100010000010000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffff700000009;
+  __m128i_out = __lsx_vmulwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000100010001fffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000800080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffe5;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffe5;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001000100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001000100;
+  __m128i_out = __lsx_vmulwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3e1f321529232736;
+  *((unsigned long*)& __m128i_op1[0]) = 0x161d0c373c200826;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe80ffffffffff02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f3f018000000000;
+  __m128i_out = __lsx_vmulwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001800390049ffaa;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0029ff96005cff88;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffff88;
+  __m128i_out = __lsx_vmulwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000ffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff80000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff8001ffff8001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000000010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000000010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x00003f8000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00003f8000000000;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3ff0010000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3ff0010000000000;
+  __m128i_out = __lsx_vmulwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff000000007fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe50000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffe020;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3fc00000010a000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x00001b0000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80044def00000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x00007f8449a19084;
+  *((unsigned long*)& __m128i_result[0]) = 0x49a210000000ff00;
+  __m128i_out = __lsx_vmulwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000104000800;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8001000180010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8001000184000800;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff80007e028401;
+  *((unsigned long*)& __m128i_result[0]) = 0x9a10144000400000;
+  __m128i_out = __lsx_vmulwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ff81007c;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffb7005f0070007c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000104000800;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000007c;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000005f0003e000;
+  __m128i_out = __lsx_vmulwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000004a294b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000006d04bc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000bd003d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffff000f0008d3c;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff0016fff8d3d;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffff000f0008d3c;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffff0016fff8d3d;
+  *((unsigned long*)& __m128i_result[1]) = 0xe10000004deb2610;
+  *((unsigned long*)& __m128i_result[0]) = 0xe101e0014dec4089;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf0fd800080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000a00028004000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0002ffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0202020202020203;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0202020202020203;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x41dfffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xbff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1111113111111141;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1111113111111121;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000111111312;
+  *((unsigned long*)& __m128i_result[0]) = 0x2222272111111410;
+  __m128i_out = __lsx_vmulwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000150000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffeffff001effff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fffff1a0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000f00f;
+  __m128i_out = __lsx_vmulwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x111110ff11111141;
+  *((unsigned long*)& __m128i_op1[0]) = 0x11111131111116a6;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffb4ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffb4ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000016;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffb4ff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffff98dea;
+  __m128i_out = __lsx_vmulwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000200020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xc110000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc00d060000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x40f3fa0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xf047ef0000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff8607db959f;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff0cff78ff96ff14;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000008a0000008a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000008900000009;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000043c5ea7b6;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000008fc4ef7b4;
+  __m128i_out = __lsx_vmulwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffffffdfffdf;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffbfc0ffffbfc0;
+  __m128i_out = __lsx_vmulwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x55aa55aa55aa55ab;
+  *((unsigned long*)& __m128i_op0[0]) = 0xaa55555655aaaaa8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ef4002d21fc7001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x28bf02d1ec6a35b2;
+  *((unsigned long*)& __m128i_result[1]) = 0x2a7b7c9260f90ee2;
+  *((unsigned long*)& __m128i_result[0]) = 0x1b1c6cdfd57f5736;
+  __m128i_out = __lsx_vmulwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000005a00000228;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff9ee000004ec;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffacdb6dbecac;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1f5533a694f902c0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1f54e0ab00000000;
+  __m128i_out = __lsx_vmulwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000fffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2028000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000ffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff0100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff0100000001;
+  __m128i_out = __lsx_vmulwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001c88bf0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000001c88bf0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x14ccc6320076a4d2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x685670d27e00682a;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x14ccc631eb3339ce;
+  *((unsigned long*)& __m128i_result[0]) = 0x685670d197a98f2e;
+  __m128i_out = __lsx_vmulwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffff800000003;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff0015172b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffff46;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff46000000ba;
+  __m128i_out = __lsx_vmulwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000007f8;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000007f8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000000000007;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000f80007;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000006c80031;
+  __m128i_out = __lsx_vmulwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffcfd000000fb00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001fe00f8000700;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0xfdfef9ff0efff900;
+  __m128i_out = __lsx_vmulwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0909090900000909;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0909090909090909;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffe000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000c6fde000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000fef01000f27ca;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000010000010101;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0101000001000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000ffef0010000;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00e4880080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0080810080808100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffe0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000005452505;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000044525043c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3fc03fc000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffc03fc040;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000000a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000a;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3f5ec0a0feefa0b0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ff02d060;
+  __m128i_out = __lsx_vmulwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00fe000100cf005f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff7f00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000004040504;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000004040504;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000010100000101;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000010100000101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff011fb11181d8ea;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80ff800000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00fe00fe000200fe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00fe00fe000200fe;
+  *((unsigned long*)& __m128i_result[1]) = 0x00fd02fe00002302;
+  *((unsigned long*)& __m128i_result[0]) = 0x007ffd0200000000;
+  __m128i_out = __lsx_vmulwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffd70b00006ea9;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffa352ffff9269;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffd70b00006ea9;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffa352ffff9269;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0001ffff0001;
+  __m128i_out = __lsx_vmulwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000060000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000060000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffff7fffffff7f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0f0f0f0f00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0f07697100000000;
+  __m128i_out = __lsx_vmulwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8101010181010101;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8101010181010101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xc0808000c0808000;
+  __m128i_out = __lsx_vmulwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7efefefe82010201;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x7afafaf88a050a05;
+  __m128i_out = __lsx_vmulwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003fffffff800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001001100110068;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m128i_result[0]) = 0xff01ff01ff01fc10;
+  __m128i_out = __lsx_vmulwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000400028000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001f7fc100000404;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000002a000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff0101ffffe000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffffa0204000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffe1ffc100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000400000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff000ff6220c0c1;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffe8081000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000007ff000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0042003e0042002f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001fffc0001fffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffbeffc2ffbeffd1;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000101010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000010001;
+  __m128i_out = __lsx_vmulwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcda585aebbb2836a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcda585aebbb2836a;
+  *((unsigned long*)& __m128i_result[1]) = 0xd78cfd70b5f65d76;
+  *((unsigned long*)& __m128i_result[0]) = 0x5779108fdedda7e4;
+  __m128i_out = __lsx_vmulwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6a5d5b056f2f4978;
+  *((unsigned long*)& __m128i_op0[0]) = 0x17483c07141b5971;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xd4bade5e2e902836;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x345002920f3017d6;
+  __m128i_out = __lsx_vmulwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003ddc5dac;
+  *((unsigned long*)& __m128i_op1[1]) = 0x67157b5100005000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x387c7e0a133f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000004870ba0;
+  __m128i_out = __lsx_vmulwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000004870ba0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x478b478b38031779;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6b769e690fa1e119;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fe98c2a0;
+  __m128i_out = __lsx_vmulwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0xd48acbfe13102acf;
+  *((unsigned long*)& __m128i_result[0]) = 0xf4af70d0c4000000;
+  __m128i_out = __lsx_vmulwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd400c02000002acf;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf4000020c4000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x6453f5e01d6e5000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000fdec000000000;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000700000004fdff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000300000000fdff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0006fff20003fff8;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002fffa00000000;
+  __m128i_out = __lsx_vmulwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00009c7c00007176;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00009c7c00007176;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbfd10d0d7b6b6b73;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc5c534920000c4ed;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000009000900;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000009000900;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000600000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000c6c6c6c6;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000c6c6c6c6;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000c6c7;
+  *((unsigned long*)& __m128i_result[0]) = 0x8d8d8d8d8d8cc6c6;
+  __m128i_out = __lsx_vmulwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000c0010000a186;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00067fff0002a207;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffff0002;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ff0000857a;
+  *((unsigned long*)& __m128i_result[0]) = 0x05fafe0101fe000e;
+  __m128i_out = __lsx_vmulwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff0000857a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x05fafe0101fe000e;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x4);
+  *((unsigned long*)& __m128i_op0[1]) = 0xe2560afe9c001a18;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe2560afe9c001a18;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ff0000857a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x05fafe0101fe000e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000d82;
+  *((unsigned long*)& __m128i_result[0]) = 0x046a09ec009c0000;
+  __m128i_out = __lsx_vmulwod_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0004280808080808;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010203030201000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000808080800;
+  __m128i_out = __lsx_vmulwev_d_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc1f03e1042208410;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x00f0001000000010;
+  __m128i_out = __lsx_vmulwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x30eb022002101b20;
+  *((unsigned long*)& __m128i_op0[0]) = 0x020310edc003023d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffc3;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00c0c000c0000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc0000000c000c000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001700000017;
+  *((unsigned long*)& __m128i_op0[0]) = 0x59f7fd8759f7fd87;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000001700000017;
+  *((unsigned long*)& __m128i_op1[0]) = 0x59f7fd8759f7fd87;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000021100000211;
+  *((unsigned long*)& __m128i_result[0]) = 0xfb141d31fb141d31;
+  __m128i_out = __lsx_vmulwev_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x030804010d090107;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1313131313131313;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1313131313131313;
+  *((unsigned long*)& __m128i_result[1]) = 0x0039d21e3229d4e8;
+  *((unsigned long*)& __m128i_result[0]) = 0x6d339b4f3b439885;
+  __m128i_out = __lsx_vmulwod_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf8f8372f752402ee;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffc0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_d_wu_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000004000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00007770ffff9411;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000004000000040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00007770ffff9411;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000100000001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x37b951002d81a921;
+  __m128i_out = __lsx_vmulwev_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003fffc0ffc0003f;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffc0ffc0003f003f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000400000004c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00007770ffff941d;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000ffff000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000077529b522400;
+  __m128i_out = __lsx_vmulwod_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000077af9450;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000400000004c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000047404f4f040d;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000214f;
+  *((unsigned long*)& __m128i_result[0]) = 0xc31b63d846ebc810;
+  __m128i_out = __lsx_vmulwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00007fff7fff8000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff9dff9dff9dff9d;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000003f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f7f00007f7f7500;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3b42017f3a7f7f01;
+  *((unsigned long*)& __m128i_result[1]) = 0x04faf60009f5f092;
+  *((unsigned long*)& __m128i_result[0]) = 0x04fafa9200000000;
+  __m128i_out = __lsx_vmulwod_q_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x440ef000440ef000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4400000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000ef0000000003b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0f8d33000f8d3300;
+  *((unsigned long*)& __m128i_result[0]) = 0x0003b80000000000;
+  __m128i_out = __lsx_vmulwod_w_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000ef0000000003b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000056;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffff86;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000eefff;
+  *((unsigned long*)& __m128i_result[0]) = 0xf8e1a03affffe3e2;
+  __m128i_out = __lsx_vmulwev_q_du_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000eefff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf8e1a03affffe3e2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3a80613fda5dcb4a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x93f0b81a914c003b;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000051649b6;
+  *((unsigned long*)& __m128i_result[0]) = 0xd2f005e44bb43416;
+  __m128i_out = __lsx_vmulwev_h_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3a80613fda5dcb4a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x93f0b81a914c003b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000feff23560000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000fd1654860000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1e242e4d68dc0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x2ff8fddb7ae20000;
+  __m128i_out = __lsx_vmulwev_d_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000002bf8b062000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffd0ba876d000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x63636b6afe486741;
+  *((unsigned long*)& __m128i_op1[0]) = 0x41f8e880ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ff110db0;
+  *((unsigned long*)& __m128i_result[0]) = 0x41f7be08ffff578a;
+  __m128i_out = __lsx_vmulwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x317fce80317fce80;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffe0001fffe0001;
+  __m128i_out = __lsx_vmulwod_w_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf2c97aaa7d8fa270;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0b73e427f7cfcb88;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ebd20000714f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00012c8a0000a58a;
+  *((unsigned long*)& __m128i_result[1]) = 0xf654ad7447e59090;
+  *((unsigned long*)& __m128i_result[0]) = 0x27b1b106b8145f50;
+  __m128i_out = __lsx_vmulwev_w_hu_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfefe000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000155;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_h_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff100000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000f0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwod_q_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmulwev_h_bu_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffa486c90f;
+  *((unsigned long*)& __m128i_op2[0]) = 0x1f52d710bf295626;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmadd_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff7f01ff01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x78c00000ff000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff7f01ff01;
+  __m128i_out = __lsx_vmadd_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfa31dfa21672e711;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1304db85e468073a;
+  *((unsigned long*)& __m128i_op2[1]) = 0x887c8beb969e00f2;
+  *((unsigned long*)& __m128i_op2[0]) = 0x101f8b680b6f8095;
+  *((unsigned long*)& __m128i_result[1]) = 0x7582ed22cb1c6e12;
+  *((unsigned long*)& __m128i_result[0]) = 0x35aaa61c944f34c2;
+  __m128i_out = __lsx_vmadd_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m128i_result[0]) = 0x5252525252525252;
+  __m128i_out = __lsx_vmadd_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101010101;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0xc);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op2[1]) = 0xbfffbfffbfffbffe;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x4000400040004002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfe01fe01fe01fe01;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe01fe01fe01fe01;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfe01fe01fe01fe01;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe01fe01fe01fe01;
+  *((unsigned long*)& __m128i_op2[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op2[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0xf10cf508f904fd01;
+  *((unsigned long*)& __m128i_result[0]) = 0xf10cf508f904fd01;
+  __m128i_out = __lsx_vmadd_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffb080ffffb080;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffb080ffffb080;
+  *((unsigned long*)& __m128i_op2[1]) = 0x004fcfcfd01f9f9f;
+  *((unsigned long*)& __m128i_op2[0]) = 0x9f4fcfcfcf800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3504b5fd2dee1f80;
+  *((unsigned long*)& __m128i_result[0]) = 0x4676f70fc0000000;
+  __m128i_out = __lsx_vmadd_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf7f7f7ff8e8c6d7e;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf7f7f7f7f7f7fbff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xf7f7f7ff8e8c6d7e;
+  *((unsigned long*)& __m128i_result[0]) = 0xf7f7f7f7f7f7fbff;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0fbc1df53c1ae3f9;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ff820f81;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xf144e32bc4e61d27;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000020017ef19f;
+  __m128i_out = __lsx_vmadd_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000004b01;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00d3012acc56f9bb;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000000a0;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000004b01;
+  __m128i_out = __lsx_vmadd_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000001000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000ffff0000ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffefffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0fff0fff0fff0fff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0fff0fff0fff0fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xf001f0010101f002;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000fffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000007f41;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000fffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010000000000001;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x01ff020000ff03ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x01346b8d00b04c5a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x002affd600000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcbc2723a4f12a5f8;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x01ff020000ff03ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x01346b8d00b04c5a;
+  __m128i_out = __lsx_vmadd_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000080808000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000080808000;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000455555555;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000455555555;
+  __m128i_out = __lsx_vmadd_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x007f00ff00ff00fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x7ffffffe00000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x7ffffffe00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x007f00ff00ff00fe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xdcec560380000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x08ec7f7f80000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff81010102;
+  *((unsigned long*)& __m128i_op2[1]) = 0x32d8f0a905b6c59b;
+  *((unsigned long*)& __m128i_op2[0]) = 0x322a52fc2ba83b96;
+  *((unsigned long*)& __m128i_result[1]) = 0xaa14efac3bb62636;
+  *((unsigned long*)& __m128i_result[0]) = 0xd6c22c8353a80d2c;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmadd_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op2[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00009c7c00007176;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xff000000001f1f00;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00009c7c00007176;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00060fbf00040fbf;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00020fbf00000fbf;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x9727b8499727b849;
+  *((unsigned long*)& __m128i_op2[0]) = 0x12755900b653f081;
+  *((unsigned long*)& __m128i_result[1]) = 0x00060fbf00040fbf;
+  *((unsigned long*)& __m128i_result[0]) = 0x00020fbf00000fbf;
+  __m128i_out = __lsx_vmadd_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000021100000211;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfb141d31fb141d31;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op2[1]) = 0x2006454690d3de87;
+  *((unsigned long*)& __m128i_op2[0]) = 0x2006454690d3de87;
+  *((unsigned long*)& __m128i_result[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_result[0]) = 0xbbc8ecc5f3ced5f3;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0674c886fcba4e98;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfdce8003090b0906;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003fffc0ffc0003f;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffc0ffc0003f003f;
+  *((unsigned long*)& __m128i_op2[1]) = 0x002a05a2f059094a;
+  *((unsigned long*)& __m128i_op2[0]) = 0x05ad3ba576eae048;
+  *((unsigned long*)& __m128i_result[1]) = 0xd4a6cc27d02397ce;
+  *((unsigned long*)& __m128i_result[0]) = 0x24b85f887e903abe;
+  __m128i_out = __lsx_vmadd_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0700f8ff0700f8ff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000007020701;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000007010701;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f8000008680f1ff;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636463abdf17;
+  *((unsigned long*)& __m128i_op0[0]) = 0x41f8e08016161198;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x6363636463abdf17;
+  *((unsigned long*)& __m128i_result[0]) = 0x41f8e08016161198;
+  __m128i_out = __lsx_vmadd_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffff00ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x17c64aaef639f093;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xf6548a1747e59090;
+  *((unsigned long*)& __m128i_op2[0]) = 0x27b169bbb8145f50;
+  *((unsigned long*)& __m128i_result[1]) = 0x10f881a20ffd02b0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ff800000;
+  __m128i_out = __lsx_vmadd_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfeffffffffff0002;
+  *((unsigned long*)& __m128i_op2[1]) = 0x54beed87bc3f2be1;
+  *((unsigned long*)& __m128i_op2[0]) = 0x8024d8f6a494afcb;
+  *((unsigned long*)& __m128i_result[1]) = 0xa8beed87bc3f2be1;
+  *((unsigned long*)& __m128i_result[0]) = 0x0024d8f6a494006a;
+  __m128i_out = __lsx_vmsub_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000fc0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001ffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001ffff0001ffff;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xfffffff0ffe04000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001fc0000;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000200010;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_op0[0]) = 0x040004000400040d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_result[0]) = 0x040004000400040d;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xb327b9363c99d32e;
+  *((unsigned long*)& __m128i_op0[0]) = 0xa1e7b475d925730f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000003f80b0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ff800000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m128i_result[1]) = 0xb327b9363c992b2e;
+  *((unsigned long*)& __m128i_result[0]) = 0xa1e7b475d925730f;
+  __m128i_out = __lsx_vmsub_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffff800;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff000000ff0000;
+  __m128i_out = __lsx_vmsub_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op2[1]) = 0x000000004c7f4c7f;
+  *((unsigned long*)& __m128i_op2[0]) = 0xe0c0c0c0d1c7d1c6;
+  *((unsigned long*)& __m128i_result[1]) = 0x061006100613030c;
+  *((unsigned long*)& __m128i_result[0]) = 0x4d6814ef9c77ce46;
+  __m128i_out = __lsx_vmsub_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7ffe7ffe7ffe7ffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000002bfd9461;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000f00;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00000000ffffff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000002bfd9461;
+  __m128i_out = __lsx_vmsub_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3727f00000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc7e01fcfe0000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3727112c00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x39201f7120000040;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xe5b9012c00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xc7e01fcfe0000000;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000004;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffff0204;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000442900007b4c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000e22b0000efa4;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000442800007b50;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff0204;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffefffffffe;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000002f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000029;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000003a24;
+  *((unsigned long*)& __m128i_op2[0]) = 0x003dbe88077c78c1;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000002f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000029;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff0000007f800000;
+  __m128i_out = __lsx_vmsub_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0fff0fff0fff0fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0fff0fff0fff0fff;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0fff0fff0fff0fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0fff0fff0fff0fff;
+  __m128i_out = __lsx_vmsub_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000003f0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffc3ffff003e;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000003f0000ffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffc3ffff003e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000f07f0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffff177fffff0fc;
+  __m128i_out = __lsx_vmsub_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffbfffefffc9510;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffbfffefffc9510;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0c0b0a090b0a0908;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a09080709080706;
+  *((unsigned long*)& __m128i_op2[1]) = 0xfffbfffefffc9510;
+  *((unsigned long*)& __m128i_op2[0]) = 0xfffbfffefffc9510;
+  *((unsigned long*)& __m128i_result[1]) = 0x29c251319c3a5c90;
+  *((unsigned long*)& __m128i_result[0]) = 0x62fb9272df7da6b0;
+  __m128i_out = __lsx_vmsub_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8f8f8f8f8f8f8f8f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8f8f8f8f8f8f8f8f;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x800000007fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x800000007fffffff;
+  __m128i_out = __lsx_vmsub_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010000000000;
+  __m128i_out = __lsx_vmsub_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001400000014;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001400000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000053a4f452;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001400000014;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001400000000;
+  __m128i_out = __lsx_vmsub_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00680486ffffffda;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff913bfffffffd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00680486ffffffda;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff913bfffffffd;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x000000003ddc5dac;
+  *((unsigned long*)& __m128i_result[1]) = 0x00680486ffffffda;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff913bb9951901;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0021b761002c593c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x002584710016cc56;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000001e03;
+  *((unsigned long*)& __m128i_result[1]) = 0x0021b761002c593c;
+  *((unsigned long*)& __m128i_result[0]) = 0x002584710016ea59;
+  __m128i_out = __lsx_vmsub_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000290;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000290;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0002000400000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0003000500000001;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000001700000017;
+  *((unsigned long*)& __m128i_op0[0]) = 0x59f7fd8759f7fd87;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffae001effae;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000001700000017;
+  *((unsigned long*)& __m128i_op2[0]) = 0x59f7fd8759f7fd87;
+  *((unsigned long*)& __m128i_result[1]) = 0xfd200ed2fd370775;
+  *((unsigned long*)& __m128i_result[0]) = 0x96198318780e32c5;
+  __m128i_out = __lsx_vmsub_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0004000400040004;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xfe3bfb01fe3bfe01;
+  *((unsigned long*)& __m128i_op2[0]) = 0xfe03fe3ffe01fa21;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsub_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
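+  /* The following cases exercise the widening multiply-accumulate family:
+     vmaddwev.* multiplies the even-indexed elements of vj and vk, and
+     vmaddwod.* the odd-indexed ones, widening each product to the next
+     element size before adding it to the corresponding element of vd.  */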
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xc0c00000c0c00000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xc0c00c01c2cd0009;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbf8000000000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcf00000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xbf80000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xcf00000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x1040400000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0961000100000001;
+  __m128i_out = __lsx_vmaddwod_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmaddwev_q_du_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffff7fffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffff8000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffff7fffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffff8000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000003fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7ff8010000000001;
+  __m128i_out = __lsx_vmaddwod_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op2[0]) = 0x001000100010c410;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmaddwod_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00003fe00ffe3fe0;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff02ff1bff02ff23;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffffff02fff4;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff02ff1bff02ff23;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffffff02fff4;
+  *((unsigned long*)& __m128i_op2[1]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x1e801ffc7fc00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7e44bde9b842ff23;
+  *((unsigned long*)& __m128i_result[0]) = 0x00011e80007edff8;
+  __m128i_out = __lsx_vmaddwod_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fc0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1e801ffc00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ff020000fff4;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fc0000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1e801ffc00000000;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ff0000ff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x01fc020000fe0100;
+  __m128i_out = __lsx_vmaddwod_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ff0000ff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x01fc020000fe0100;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000017fda829;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000017fda829;
+  __m128i_out = __lsx_vmaddwod_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000017fda829;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000036280001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x42a0000042a02001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000005555555554;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000005555555554;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000036280001;
+  *((unsigned long*)& __m128i_result[0]) = 0x42a0000042a02001;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffeffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vmaddwod_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff8000000000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000800000000ffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x697eba2bedfa9c82;
+  *((unsigned long*)& __m128i_op2[0]) = 0xd705c77a7025c899;
+  *((unsigned long*)& __m128i_result[1]) = 0xffcb410000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffeb827ffffffff;
+  __m128i_out = __lsx_vmaddwod_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000fff08;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000fff09;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff80ff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff80000000ffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff80ff0010ff06;
+  *((unsigned long*)& __m128i_result[0]) = 0x00007f01000eff0a;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00ffff0000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00ffff0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x007f7f80807f7f80;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f78787f00f7f700;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000f7f700f7f700;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000004000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfff8004000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfefd7f7e7f7f7f7f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9d519ee8d2d84f1d;
+  *((unsigned long*)& __m128i_op2[1]) = 0x8644ffff0000ffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000ffff0000fffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x85bd6b0e94d89998;
+  *((unsigned long*)& __m128i_result[0]) = 0xd83c8081ffff8080;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xe0d56a9774f3ea31;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe0dd268932a5edf9;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe0d56a9774f3ea31;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe0dd268932a5edf9;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xd8248069ffe78077;
+  *((unsigned long*)& __m128i_result[1]) = 0xe0d56a9774f3ea31;
+  *((unsigned long*)& __m128i_result[0]) = 0xbddaa86803e33c2a;
+  __m128i_out = __lsx_vmaddwod_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd8248069ffe78077;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xd8248069ffe78077;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xe31c86e90cda86f7;
+  __m128i_out = __lsx_vmaddwod_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x03574e3a62407e03;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7da9b23a624082fd;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x03574e39e496cbc9;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001010000;
+  __m128i_out = __lsx_vmaddwod_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0028280000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0028280000282800;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x7505853d654185f5;
+  *((unsigned long*)& __m128i_op2[0]) = 0x01010000fefe0101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0028280000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x012927ffff272800;
+  __m128i_out = __lsx_vmaddwod_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0010001000030000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0006000200000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7505445465593af1;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0100d6effefd0498;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000030000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0006000200000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vmaddwev_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff00000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x7fffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7e00fe0000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001fc0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000040004000100;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001fc0000;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffffc00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fffffc00;
+  __m128i_out = __lsx_vmaddwod_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000fe00ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000ff00fe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000fe00ff;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3c600000ff800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x78c00000ff000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x78c00000ff000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x78c00000ff000000;
+  __m128i_out = __lsx_vmaddwod_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000b5207f80;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000b5207f80;
+  __m128i_out = __lsx_vmaddwod_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000400;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000400;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000400;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000400;
+  __m128i_out = __lsx_vmaddwev_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x000000000000040d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000fff3;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x000000000000040d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000010400;
+  __m128i_out = __lsx_vmaddwev_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xfffffffff8f8dada;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffff01018888;
+  *((unsigned long*)& __m128i_result[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000c5ac01015b;
+  *((unsigned long*)& __m128i_op1[0]) = 0xaaacac88a3a9a96a;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000f;
+  __m128i_out = __lsx_vmaddwod_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000020302030;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000020302030;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffff946c;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffff946b;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff3c992b2e;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffff730f;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00000000ffff946c;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffff946b;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffff946c;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffdffff946c;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ffff7f00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff007f0101017f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000020000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000183fffffe5;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000073;
+  *((unsigned long*)& __m128i_op2[0]) = 0x000000000000002a;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffff7f00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff007f0101017f;
+  __m128i_out = __lsx_vmaddwod_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000401000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000080000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000080000000000;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffff800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffff800;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff000000ff0000;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000007070700;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000002010202;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000007070700;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000002010202;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000fefefe6a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c2bac2c2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fefefe6a;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000c2bac2c2;
+  __m128i_out = __lsx_vmaddwod_q_du_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000001ff000001ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000001ff000001ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000001ff000001ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000001ff000001ff;
+  *((unsigned long*)& __m128i_op2[1]) = 0xff80ffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x7ffffffeffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000002fe800000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7ffffe0100000000;
+  __m128i_out = __lsx_vmaddwev_q_du_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff00000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe80000000000001;
+  __m128i_out = __lsx_vmaddwev_q_du_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff82bb9784;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffc6bb97ac;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x7fffffff82bb9784;
+  *((unsigned long*)& __m128i_op2[0]) = 0x7fffffffc6bb97ac;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff82bb9784;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffc6bb97ac;
+  __m128i_out = __lsx_vmaddwev_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe80000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe80000000000001;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_du_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000030000003f;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x3fffffffc0000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vmaddwev_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0xbfffbfffbfffbffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xbfffbfffbfffbffe;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xbfffbfffbfffbffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe218ffffea10;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff208fffffa02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xfffff208fffffa02;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffe218ffffea10;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffff208fffffa02;
+  __m128i_out = __lsx_vmaddwod_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffd3000000130000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffd3000000130000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffd3000000130000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffd3000000130000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffd3000000130000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffd3000000130000;
+  __m128i_out = __lsx_vmaddwev_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op2[1]) = 0x001ffff0003ffff0;
+  *((unsigned long*)& __m128i_op2[0]) = 0x000fffefffefffef;
+  *((unsigned long*)& __m128i_result[1]) = 0x8009700478185812;
+  *((unsigned long*)& __m128i_result[0]) = 0xe009f00ee7fb0800;
+  __m128i_out = __lsx_vmaddwod_q_du_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000002bfd9461;
+  *((unsigned long*)& __m128i_op2[1]) = 0x3f8000003f800001;
+  *((unsigned long*)& __m128i_op2[0]) = 0x3f8000003f800001;
+  *((unsigned long*)& __m128i_result[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3f8000003f800000;
+  __m128i_out = __lsx_vmaddwod_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0007000000040000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003000000010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000780000007800;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0007000000040000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0003000000010000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000080003f80ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op2[1]) = 0x3ff0010000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x3ff0010000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000080003f80ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000bd3d00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000bd3d00000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000bd3d00000000;
+  __m128i_out = __lsx_vmaddwod_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000010000000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000008000000080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000c7fff000c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xfffff000f0008d3c;
+  *((unsigned long*)& __m128i_op2[0]) = 0xfffff0016fff8d3d;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000100f8100002;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff0ff8006f0f950;
+  __m128i_out = __lsx_vmaddwod_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000095896a760000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x006f0efe258ca851;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffff7fc8ffff8000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffff200000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000015516a768038;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff9ed2e1c000;
+  __m128i_out = __lsx_vmaddwod_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000036de0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000003be14000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00000000ffff7a53;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000001f0000;
+  __m128i_out = __lsx_vmaddwev_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001f0a;
+  __m128i_out = __lsx_vmaddwev_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000007b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffbffffff85;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffc0000fdfc;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vmaddwev_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ffffff03ffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00013fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000088500000f6a0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001fffd00000407;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000442900007b4c;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000e22b0000efa4;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffffff03ffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00013fff;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x007ffd0001400840;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x007ffd0001400840;
+  __m128i_out = __lsx_vmaddwod_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xff81ff82ff810081;
+  *((unsigned long*)& __m128i_op2[0]) = 0xff82ff810081ff81;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1111113111111131;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111113111111131;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffff0008;
+  *((unsigned long*)& __m128i_result[1]) = 0x1111113111111141;
+  *((unsigned long*)& __m128i_result[0]) = 0x1111113111111121;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3f77aab500000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffc100010001;
+  *((unsigned long*)& __m128i_op2[1]) = 0x3f77aab500000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000ffc100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0fbc1df53c1ae3f9;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ff820f81;
+  __m128i_out = __lsx_vmaddwod_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00010020fffeffde;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100400100200e68;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00010020fffeffde;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0100400100200e68;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x1ff85ffe2ae5d973;
+  *((unsigned long*)& __m128i_result[1]) = 0x00010020fffeffde;
+  *((unsigned long*)& __m128i_result[0]) = 0x011f57c100201a46;
+  __m128i_out = __lsx_vmaddwod_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000ffc2f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00201df000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffc2ffe700000007;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffc100010001;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00010020fffeffde;
+  *((unsigned long*)& __m128i_op2[0]) = 0x011f57c100201a46;
+  *((unsigned long*)& __m128i_result[1]) = 0x001ffce00016fb41;
+  *((unsigned long*)& __m128i_result[0]) = 0x57cb857100001a46;
+  __m128i_out = __lsx_vmaddwev_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0032000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_op2[0]) = 0x2020202020207f7f;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffff0000;
+  __m128i_out = __lsx_vmaddwod_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x05d0ba0002e8802e;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd005e802174023d6;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc000c000c000ff81;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0ba00ba00ba00ba0;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0ba00ba00ba011eb;
+  *((unsigned long*)& __m128i_result[1]) = 0x05d0ae6002e8748e;
+  *((unsigned long*)& __m128i_result[0]) = 0xcd1de80217374041;
+  __m128i_out = __lsx_vmaddwev_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000cdc1;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe93d0bd19ff0c170;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5237c1bac9eadf55;
+  *((unsigned long*)& __m128i_op2[1]) = 0x05d0ae6002e8748e;
+  *((unsigned long*)& __m128i_op2[0]) = 0xcd1de80217374041;
+  *((unsigned long*)& __m128i_result[1]) = 0xf490ee600180ce20;
+  *((unsigned long*)& __m128i_result[0]) = 0x063bff74fb46e356;
+  __m128i_out = __lsx_vmaddwev_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001ca02f854;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00000001ca02f854;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001ca02f854;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100013fa0;
+  __m128i_out = __lsx_vmaddwev_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00d3012acc56f9bb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000a0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000120002000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_result[1]) = 0x00d3012acc56f9bb;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001021;
+  __m128i_out = __lsx_vmaddwod_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000120002000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2000200000013fa0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000013fa0;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000120002000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100013fa0;
+  __m128i_out = __lsx_vmaddwod_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf047ef0000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3941248880000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3941248880000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x40f3fa0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x76f4248880000000;
+  __m128i_out = __lsx_vmaddwev_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000003a24;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003dbe88077c78c1;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0020002000200020;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000003a24;
+  *((unsigned long*)& __m128i_result[0]) = 0x003dc288077c7cc1;
+  __m128i_out = __lsx_vmaddwod_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000021;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_op2[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_op2[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0fff0fff0fff0fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0fff0fff0fff0fff;
+  __m128i_out = __lsx_vmaddwev_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0006ffff0004ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0002ffff0000ffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffff7f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002fffefffd0001;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x1000100012030e02;
+  *((unsigned long*)& __m128i_result[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefefefefefefefe;
+  __m128i_out = __lsx_vmaddwod_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ef4002d21fc7001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x28bf02d1ec6a35b2;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffb96bffff57c9;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff6080ffff4417;
+  *((unsigned long*)& __m128i_op2[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xff8000007fc00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7ef400ad21fc7081;
+  *((unsigned long*)& __m128i_result[0]) = 0x28bf0351ec69b5f2;
+  __m128i_out = __lsx_vmaddwod_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[0]) = 0x6363636363636363;
+  __m128i_out = __lsx_vmaddwev_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100fe000100fe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000100fe000100fe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vmaddwev_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000002000;
+  __m128i_out = __lsx_vmaddwod_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000006e17bfd8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000006e17bfd8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffff0100000001;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffff0100000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000006e17bfd8;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000006e17bfd8;
+  __m128i_out = __lsx_vmaddwev_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000008130c7f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1f1f1f1f1f1f1f00;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1f1f1f27332b9f00;
+  *((unsigned long*)& __m128i_op2[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op2[0]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_result[1]) = 0x06b1213ef1efa299;
+  *((unsigned long*)& __m128i_result[0]) = 0x8312f5424ca4a07f;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa23214697fd03f7f;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmaddwev_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x14ccc6320176a4d2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x685670d37e80682a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x14ccc6320176a4d2;
+  *((unsigned long*)& __m128i_result[0]) = 0x685670d37e80682a;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op2[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op2[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x00010000fffffffc;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000004;
+  __m128i_out = __lsx_vmaddwev_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffb00151727;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff0015172b;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00010000fffffffc;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffb00151727;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000f02e1f80f04;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000f02e1f80f04;
+  __m128i_out = __lsx_vmaddwev_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff80800001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff80800001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffff7fff7ef;
+  *((unsigned long*)& __m128i_op1[0]) = 0x80808080ffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000080800000808;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffbff8888080a;
+  *((unsigned long*)& __m128i_result[0]) = 0x080803ff807ff7f9;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000080800000808;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x010105017878f8f6;
+  *((unsigned long*)& __m128i_op2[0]) = 0xf8f8fd0180810907;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000080800000808;
+  __m128i_out = __lsx_vmaddwod_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffe000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c6fde000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xe000e0006080b040;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffe000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000c6fde000;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000010000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff00ff00fe00ff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00000fff00000e36;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000fef01000e27ca;
+  __m128i_out = __lsx_vmaddwev_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000e36400005253;
+  *((unsigned long*)& __m128i_op2[0]) = 0x000035ed0000e000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000008000e2e3;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000008000e2e3;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000080806362;
+  *((unsigned long*)& __m128i_result[0]) = 0x807f808000000000;
+  __m128i_out = __lsx_vmaddwev_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000020000020;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000020000020;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00000000ff801c9e;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000810000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000080000000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0400400204004002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000080000000800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001200100012001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3fc03fc000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f801fe000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x3fc03fc000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f801fe000000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc0411fe800000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x601fbfbeffffffff;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffe00029f9f6061;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3f5ec0a0feefa0b0;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffe00029fb060b1;
+  __m128i_out = __lsx_vmaddwev_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x01fe01fd01fd01fd;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x5d7f5d007f6a007f;
+  *((unsigned long*)& __m128i_op2[0]) = 0x7fff7fff7fff7f00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbafebb00ffd500fe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000060000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000500000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000060000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0c0b0a090b0a0908;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a09080709080706;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmaddwod_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000002b0995850;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff80005613;
+  *((unsigned long*)& __m128i_op1[0]) = 0x007f800000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffff80005613;
+  *((unsigned long*)& __m128i_op2[0]) = 0x007f800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff00011cf0c569;
+  *((unsigned long*)& __m128i_result[0]) = 0xc0000002b0995850;
+  __m128i_out = __lsx_vmaddwev_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080008000800080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0080006b00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001b19b1c9c6da5a;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x001b19b1c9c6da5a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0080008000800080;
+  *((unsigned long*)& __m128i_result[0]) = 0x008003496dea0c61;
+  __m128i_out = __lsx_vmaddwod_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0080008000800080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ffffff81fe;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffff00ffff7e01;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x000000fffe01fd02;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00fe00fffe86f901;
+  __m128i_out = __lsx_vmaddwev_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op2[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op2[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001808281820102;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001808201018081;
+  __m128i_out = __lsx_vmaddwev_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffa;
+  __m128i_out = __lsx_vmaddwev_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_du_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8101010181010101;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8101010181010101;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000010100fe0101;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffff0200ffff01ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffc0000000000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000455555555;
+  *((unsigned long*)& __m128i_result[1]) = 0xffc0000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffc0000000000004;
+  __m128i_out = __lsx_vmaddwod_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010058;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000158;
+  __m128i_out = __lsx_vmaddwod_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010058;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010058;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xf8f8f8f8f8f8f8f8;
+  *((unsigned long*)& __m128i_op2[0]) = 0xf8f8f8f8f8f8f8f8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffc0800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffc0800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffc0800000;
+  __m128i_out = __lsx_vmaddwod_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000040;
+  __m128i_out = __lsx_vmaddwev_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_q_du_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000700000004e000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003000000012020;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00000000e00a18f5;
+  *((unsigned long*)& __m128i_op2[0]) = 0x000000002023dcdc;
+  *((unsigned long*)& __m128i_result[1]) = 0x000700000004e000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0003000000012020;
+  __m128i_out = __lsx_vmaddwod_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmaddwev_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vmaddwev_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_du_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000101010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000101010015;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffed00010001;
+  __m128i_out = __lsx_vmaddwev_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000014;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0002000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000014;
+  __m128i_out = __lsx_vmaddwod_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000053a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080000700000014;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffbffda;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000005003a;
+  *((unsigned long*)& __m128i_op2[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0080000700000014;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fffbffda;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x680485c8b304b019;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc89d7f0fed582019;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000003ddc5dac;
+  *((unsigned long*)& __m128i_op2[1]) = 0x67157b5100005000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x387c7e0a133f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0x680485c8b304b019;
+  *((unsigned long*)& __m128i_result[0]) = 0xc89d7f0ff90da019;
+  __m128i_out = __lsx_vmaddwev_w_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffac0a000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000200000001b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffac0a000000;
+  __m128i_out = __lsx_vmaddwod_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x98147a504d145000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x377b810912c0e000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x5a57bacbd7e39680;
+  *((unsigned long*)& __m128i_op2[0]) = 0x6bae051ffed76001;
+  *((unsigned long*)& __m128i_result[1]) = 0xf3eb458161080000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffe9454286c0e000;
+  __m128i_out = __lsx_vmaddwev_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7c7c9c0000007176;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00000000f3040705;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7c7c9c0000007176;
+  __m128i_out = __lsx_vmaddwev_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7c7c9c0000007176;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00ff000000001f1f;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7c7c9c0000007176;
+  __m128i_out = __lsx_vmaddwev_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xbfd10d0d7b6b6b73;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc5c53492f25acbf2;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff000000001f1f00;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xbfd10d0d7b6b6b73;
+  *((unsigned long*)& __m128i_result[0]) = 0xc5c53492f25acbf2;
+  __m128i_out = __lsx_vmaddwev_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_q_du_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
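+  /* A few of the following cases also issue an __lsx_vpickve2gr_*
+     element extraction; the scalar result is stored but not compared
+     in these lines, so the call appears to serve only to exercise the
+     expansion of the extract pattern.  */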
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8d78336c83652b86;
+  *((unsigned long*)& __m128i_op1[0]) = 0x39c51f389c0d6112;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffff0001ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ff9b0082;
+  *((unsigned long*)& __m128i_result[0]) = 0x003a0037fff2fff8;
+  __m128i_out = __lsx_vmaddwev_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff0000857a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x05fafe0101fe000e;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ff0000857a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x05fafe0101fe000e;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ff0000857a;
+  *((unsigned long*)& __m128i_result[0]) = 0x05fafe0101fe000e;
+  __m128i_out = __lsx_vmaddwev_h_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000024170000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000042ab41;
+  *((unsigned long*)& __m128i_result[0]) = 0xb1b1b1b1b16f0670;
+  __m128i_out = __lsx_vmaddwod_q_du_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000042ab41;
+  *((unsigned long*)& __m128i_op0[0]) = 0xb1b1b1b1b16f0670;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000042ab41;
+  *((unsigned long*)& __m128i_result[0]) = 0xb1b1b1b1b16f0670;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x841f000fc28f801f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x841f000fc28f801f;
+  *((unsigned long*)& __m128i_op2[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xe593c8c4e593c8c4;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x76ecfc8b85ac78db;
+  __m128i_out = __lsx_vmaddwev_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x400000003fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4000000040000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x400000003fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x4000000040000000;
+  __m128i_out = __lsx_vmaddwev_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x328e1080889415a0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3960b1a401811060;
+  *((unsigned long*)& __m128i_op1[1]) = 0x328e1080889415a0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3960b1a401811060;
+  *((unsigned long*)& __m128i_op2[1]) = 0x020310edc003023d;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x32f3c7a38f9f4b8b;
+  *((unsigned long*)& __m128i_result[0]) = 0x2c9e5069f5d57780;
+  __m128i_out = __lsx_vmaddwod_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x022002101b200203;
+  *((unsigned long*)& __m128i_op0[0]) = 0x022002101b200203;
+  *((unsigned long*)& __m128i_op1[1]) = 0x022002101b200203;
+  *((unsigned long*)& __m128i_op1[0]) = 0x022002101b200203;
+  *((unsigned long*)& __m128i_op2[1]) = 0x000000080c43b700;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x036caeeca7592703;
+  *((unsigned long*)& __m128i_result[0]) = 0x022002101b200203;
+  __m128i_out = __lsx_vmaddwev_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x021b7d24c9678a35;
+  *((unsigned long*)& __m128i_op1[0]) = 0x030298a6a1030a49;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0051005200510052;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0051005200510052;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffaeffaeffaeffae;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffaeffaeffaeffae;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffe65ecc1be5bc;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffe65ecc1be5bc;
+  __m128i_out = __lsx_vmaddwev_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffc105d1aa;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffbc19ecca;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff3efa;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffff43e6;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x34947b4b11684f92;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd73691661e5b68b4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000016f303dff6d2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000016f303dff6d2;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x7fffffff00000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x34947b4b11684f92;
+  *((unsigned long*)& __m128i_result[0]) = 0xee297a731e5c5f86;
+  __m128i_out = __lsx_vmaddwev_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op2[1]) = 0x4399d3221a29d3f2;
+  *((unsigned long*)& __m128i_op2[0]) = 0xc3818bffe7b7a7b8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000868686868686;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0674c8868a74fc80;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfdce8003090b0906;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0674c8868a74fc80;
+  *((unsigned long*)& __m128i_op2[0]) = 0xfdce8003090b0906;
+  *((unsigned long*)& __m128i_result[1]) = 0x0029aeaca57d74e6;
+  *((unsigned long*)& __m128i_result[0]) = 0xdbe332365392c686;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf1f1f1f149ed7273;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf1f1f1f1865e65a1;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff941d;
+  *((unsigned long*)& __m128i_op2[1]) = 0xf1f1f1f149ed7273;
+  *((unsigned long*)& __m128i_op2[0]) = 0xf1f1f1f1865e65a1;
+  *((unsigned long*)& __m128i_result[1]) = 0xf1f1f1f149ed7273;
+  *((unsigned long*)& __m128i_result[0]) = 0x78508ad4ec2ffcde;
+  __m128i_out = __lsx_vmaddwev_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf1f1f1f149ed7273;
+  *((unsigned long*)& __m128i_op0[0]) = 0x78508ad4ec2ffcde;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffdfdc0d;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00000000ffdfdc0d;
+  *((unsigned long*)& __m128i_result[1]) = 0xf1f1f1f149ed7273;
+  *((unsigned long*)& __m128i_result[0]) = 0x78508ad4ae70fd87;
+  __m128i_out = __lsx_vmaddwev_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000a752a55;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0a753500950fa306;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x000000000a752a55;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0a753500950fa306;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000a752a55;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a753500a9fa0d06;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000467fe000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000003ff8;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000003ff8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000467fef81;
+  __m128i_out = __lsx_vmaddwod_h_bu_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwev_q_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000440efffff000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000003b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x440ef000440ef000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x4400000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000440efffff000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000003b;
+  __m128i_out = __lsx_vmaddwev_h_bu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000002bf8b062000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffd0ba876d000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe363636363abdf16;
+  *((unsigned long*)& __m128i_op1[0]) = 0x41f8e08016161198;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0005840100000005;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0005847b00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0004e8f09e99b528;
+  *((unsigned long*)& __m128i_result[0]) = 0xcf1225129ad22b6e;
+  __m128i_out = __lsx_vmaddwod_q_du(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffa7;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00c2758000bccf42;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00a975be00accf03;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00c2758000bccf42;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00a975be00accf03;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000930400008a10;
+  *((unsigned long*)& __m128i_result[0]) = 0x00006f9100007337;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_q_du_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmaddwod_w_hu_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_result[0]) = 0x4f4f4f4f4f4f4f4f;
+  __m128i_out = __lsx_vmaddwod_d_wu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x9c83e21a22001818;
+  *((unsigned long*)& __m128i_op1[0]) = 0xdd3b8b02563b2d7b;
+  *((unsigned long*)& __m128i_op2[1]) = 0x000000009c83e21a;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000022001818;
+  *((unsigned long*)& __m128i_result[1]) = 0xf2c97aaa7d8fa270;
+  *((unsigned long*)& __m128i_result[0]) = 0x0b73e427f7cfcb88;
+  __m128i_out = __lsx_vmaddwev_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000120000000d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000e0000000e;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x000000120000000d;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000cfffffff2;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000dfffffff1;
+  __m128i_out = __lsx_vmaddwev_d_wu_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000120000000d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000011ffee;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000dfff2;
+  __m128i_out = __lsx_vmaddwod_d_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0177fff0fffffff0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000011ff8bc;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00bbfff7fffffff7;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffff008ff820;
+  *((unsigned long*)& __m128i_result[1]) = 0xffe8008fffe7008f;
+  *((unsigned long*)& __m128i_result[0]) = 0x00010001f1153780;
+  __m128i_out = __lsx_vmaddwev_w_hu(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
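+  /* From here on the cases switch to the element-wise division
+     intrinsics: __lsx_vdiv_{b,h,w,d} divide signed elements and
+     __lsx_vdiv_{bu,hu,wu,du} the unsigned variants.  As with the
+     multiply-add cases above, the expected vectors are precomputed
+     and checked with ASSERTEQ_64.  */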
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000001;
+  __m128i_out = __lsx_vdiv_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vdiv_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00003ff000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fffc00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00001ff800000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ffe800e80000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000200000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x195f307a5d04acbb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6a1a3fbb3c90260e;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xe6a0cf86a2fb5345;
+  *((unsigned long*)& __m128i_result[0]) = 0x95e5c045c36fd9f2;
+  __m128i_out = __lsx_vdiv_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x10f917d72d3d01e4;
+  *((unsigned long*)& __m128i_op1[0]) = 0x203e16d116de012b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000080000000;
+  __m128i_out = __lsx_vdiv_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000073;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000002a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ffffff00ff00ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff00ffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0141010101410101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0141010101410101;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfebffefffebffeff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfebffefffebffeff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000000;
+  __m128i_out = __lsx_vdiv_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe3e3e3e3e3e3e3e3;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe3e3e3e3e3e3e3e3;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe3e3e3e3e3e3e3e3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vdiv_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363797c63996399;
+  *((unsigned long*)& __m128i_op0[0]) = 0x171f0a1f6376441f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363797c63996399;
+  *((unsigned long*)& __m128i_op1[0]) = 0x171f0a1f6376441f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vdiv_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000897957687;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000408;
+  *((unsigned long*)& __m128i_op1[1]) = 0x80010001b57fc565;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8001000184000be0;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000080001fffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000036de0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003be14000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000007e8a60;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000001edde;
+  __m128i_out = __lsx_vdiv_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vdiv_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe93d0bd19ff0c170;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5237c1bac9eadf55;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x40f3fa0000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffb4ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffb4ff;
+  *((unsigned long*)& __m128i_result[1]) = 0xc110000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xc00d060000000000;
+  __m128i_out = __lsx_vdiv_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffd700;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000000;
+  __m128i_out = __lsx_vdiv_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffffffdfffdf;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000003f200001e01;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000014bf000019da;
+  *((unsigned long*)& __m128i_op1[1]) = 0x9c9c99aed5b88fcf;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7c3650c5f79a61a3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000015d926c7;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000e41b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000fffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0010000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vdiv_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op1[0]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x77c0404a4000403a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x77c03fd640003fc6;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000000000007;
+  *((unsigned long*)& __m128i_op1[1]) = 0x31b1777777777776;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6eee282828282829;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003fffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000003dffc2;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001084314a6;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001084314a6;
+  __m128i_out = __lsx_vdiv_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000ffef0010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000010000010101;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0101000001000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vdiv_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4280000042800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xbd7fffffbd800000;
+  __m128i_out = __lsx_vdiv_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000010100000101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000010100000101;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff9cf0d77b;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc1000082b0fb585b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffbfff8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0080008000800080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0080006b0000000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000001ff1745745c;
+  __m128i_out = __lsx_vdiv_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000020000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101000101010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000fe0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff00ffffff00ff;
+  __m128i_out = __lsx_vdiv_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0effeffefdffa1e0;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe6004c5f64284224;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfeffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000000010000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0042003e0042002f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001fffc0001fffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0042003e0042002f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001fffc0001fffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vdiv_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101010100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010100000000;
+  __m128i_out = __lsx_vdiv_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080808080800008;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x33f5c2d7d975d7fe;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff010000ff01;
+  __m128i_out = __lsx_vdiv_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf9796558e39953fd;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff9727ffff9727;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffe79ffffba5f;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x010169d9010169d9;
+  *((unsigned long*)& __m128i_result[0]) = 0x01010287010146a1;
+  __m128i_out = __lsx_vdiv_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x30eb022002101b20;
+  *((unsigned long*)& __m128i_op1[0]) = 0x020310edc003023d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff14eb54ab;
+  *((unsigned long*)& __m128i_op0[0]) = 0x14ea6a002a406a00;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff80008a7555aa;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0a7535006af05cf9;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000000;
+  __m128i_out = __lsx_vdiv_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000feff2356;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fd165486;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000007;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000246d9755;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000002427c2ee;
+  __m128i_out = __lsx_vdiv_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000feff23560000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fd1654860000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363abdf16;
+  *((unsigned long*)& __m128i_op1[0]) = 0x41f8e08016161198;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000030000;
+  __m128i_out = __lsx_vdiv_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000004ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000667ae56;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000020;
+  __m128i_out = __lsx_vdiv_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xa2e3a36363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0xa2e3a36463636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f80000000000007;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000700000007;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000e32c50e;
+  *((unsigned long*)& __m128i_result[0]) = 0xf2b2ce330e32c50e;
+  __m128i_out = __lsx_vdiv_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vdiv_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
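+  /* The cases below move on to the element-wise remainder intrinsics
+     __lsx_vmod_{b,h,w,d} and their unsigned __lsx_vmod_{b,h,w,d}u variants,
+     using the same operand/expected-result layout as the division cases
+     above.  */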
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f801fa06451ef11;
+  *((unsigned long*)& __m128i_op1[0]) = 0x68bcf93435ed25ed;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000022666621;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffdd9999da;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7f7f7f7f00107f04;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f0000fd7f0000fd;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000066621;
+  *((unsigned long*)& __m128i_result[0]) = 0x01ff00085e9900ab;
+  __m128i_out = __lsx_vmod_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x16161616a16316b0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x16161616a16316b0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000101fd01fe;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff80ff80ff80ff80;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff80ff8080008000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000101fd01fe;
+  __m128i_out = __lsx_vmod_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x82c539ffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc72df14afbfafdf9;
+  *((unsigned long*)& __m128i_op1[1]) = 0x82c539ffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc72df14afbfafdf9;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5a5a5a5a5b5a5b5a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5a5a5a5a5b5a5b5a;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000001494b494a;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001494b494a;
+  __m128i_out = __lsx_vmod_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fffe0000fffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000fffe0000fffe;
+  __m128i_out = __lsx_vmod_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f8000004f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f8000004f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vmod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ffff0000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ffff000000ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x03c0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x03c0038000000380;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffff0000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffff000000ff00;
+  __m128i_out = __lsx_vmod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000104000800;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0101080408040804;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0804080407040804;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000104000800;
+  __m128i_out = __lsx_vmod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000bd3d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fff0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xefffdffff0009d3d;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000bd3d;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000007fff0000;
+  __m128i_out = __lsx_vmod_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff994cb09c;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffc3639d96;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffcafff8ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000a0;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff2cfed4fea8ff44;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffeffff0035ff8f;
+  *((unsigned long*)& __m128i_result[1]) = 0x00d3012acc56f9bb;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000a0;
+  __m128i_out = __lsx_vmod_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1202120212021202;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1202120212021202;
+  *((unsigned long*)& __m128i_result[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1000100010001000;
+  __m128i_out = __lsx_vmod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x80000000307d0771;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0d8e36706ac02b9b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x80000000307d0771;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0d8e36706ac02b9b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op0[0]) = 0x370bdfeca2eb9931;
+  *((unsigned long*)& __m128i_op1[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op1[0]) = 0x370bdfeca2eb9931;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x37c0001000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x37c0001000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0003c853c843c844;
+  *((unsigned long*)& __m128i_result[0]) = 0x0003c853c843c844;
+  __m128i_out = __lsx_vmod_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0003c853c843c844;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003c853c843c844;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xd70b30c96ea9f4e8;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa352bfac9269e0aa;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001808281820102;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001808201018081;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001008281820102;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001008201010081;
+  __m128i_out = __lsx_vmod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffeb;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffeb;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000003;
+  __m128i_out = __lsx_vmod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc0ff80ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
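+  /* Scalar element extract; the value stored in unsigned_int_out is not
+     checked against an expected value here.  */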
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_bu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_wu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003ddc5dac;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfcfcfcdcfcfcfcdc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfcfcfcdcfcfcfcdc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000003ddc5dac;
+  __m128i_out = __lsx_vmod_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000004870ba0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x478b478b38031779;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6b769e690fa1e119;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000004870ba0;
+  __m128i_out = __lsx_vmod_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010240010202;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x805ffffe01001fe0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9a49e11102834d70;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8144ffff01c820a4;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9b2ee1a4034b4e34;
+  *((unsigned long*)& __m128i_result[1]) = 0xff1affff01001fe0;
+  *((unsigned long*)& __m128i_result[0]) = 0xff1aff6d02834d70;
+  __m128i_out = __lsx_vmod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x413e276583869d79;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f7f017f9d8726d3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2006454690d3de87;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2006454690d3de87;
+  *((unsigned long*)& __m128i_op1[1]) = 0x2006454690d3de87;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2006454690d3de87;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001d001d001d001d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001d001d001d0000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001d001d001d001d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001d001d001d0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000011ffee;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000dfff2;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf6548a1747e59090;
+  *((unsigned long*)& __m128i_op0[0]) = 0x27b169bbb8145f50;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf6548a1747e59090;
+  *((unsigned long*)& __m128i_op1[0]) = 0x27b169bbb8145f50;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x02b010f881a281a2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x27b169bbb8145f50;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000200020002;
+  __m128i_out = __lsx_vmod_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffefffff784;
+  *((unsigned long*)& __m128i_op1[1]) = 0x10f8000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001000010f8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0177fff0fffffff0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000011ff8bc;
+  __m128i_out = __lsx_vmod_du(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff100000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000f000000000000;
+  __m128i_out = __lsx_vmod_hu(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
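+  /* The cases below cover the saturation intrinsics __lsx_vsat_{b,h,w,d} and
+     __lsx_vsat_{b,h,w,d}u, which clamp each element to a signed or unsigned
+     range selected by the immediate operand.  */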
+  *((unsigned long*)& __m128i_op0[1]) = 0xbf8000000000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcf00000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x003f00000000003f;
+  *((unsigned long*)& __m128i_result[0]) = 0x003f000000000000;
+  __m128i_out = __lsx_vsat_hu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_b(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_du(__m128i_op0,0x20);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x04e00060ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x04e00060ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x007fffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x007fffffffffffff;
+  __m128i_out = __lsx_vsat_w(__m128i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_wu(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000017f0a82;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000003f;
+  __m128i_out = __lsx_vsat_w(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000040400000383;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff8383ffff7d0d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000040400000383;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffe000ffff1fff;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff1739ffff48aa;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff2896ffff5b88;
+  *((unsigned long*)& __m128i_result[1]) = 0x3f3f17393f3f3f3f;
+  *((unsigned long*)& __m128i_result[0]) = 0x3f3f283f3f3f3f3f;
+  __m128i_out = __lsx_vsat_bu(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffff8f8da00;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff01018888;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00ffff00;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fff7fc01;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000f;
+  __m128i_out = __lsx_vsat_wu(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_wu(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_wu(__m128i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_w(__m128i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3f8000003f800001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3f8000003f800001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000000010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000000010001;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
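+  /* As above, the extracted scalar is stored but not checked here.  */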
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8006000080020000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8004000080020000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffff8fffffff8;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffff8fffffff8;
+  __m128i_out = __lsx_vsat_w(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000001fc00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000000010000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010100000000;
+  __m128i_out = __lsx_vsat_bu(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffcc000b000b000b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000b000b010a000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f7f000b000b000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000b000b010a000b;
+  __m128i_out = __lsx_vsat_bu(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_wu(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000bd3d00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000bd3d00000000;
+  __m128i_out = __lsx_vsat_wu(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000c000ffffc000;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_wu(__m128i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_d(__m128i_op0,0x35);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_wu(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_du(__m128i_op0,0x25);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffffffffffff;
+  __m128i_out = __lsx_vsat_du(__m128i_op0,0x3e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000007f8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000007f8;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000ff;
+  __m128i_out = __lsx_vsat_hu(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000068;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000001f;
+  __m128i_out = __lsx_vsat_bu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0038d800ff000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00fffe00fffffe00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0038f000ff000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00fffe00fffffe00;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xf000000000000000;
+  __m128i_out = __lsx_vsat_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0007000000050000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003000000010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00003fff00003fff;
+  __m128i_out = __lsx_vsat_wu(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636389038903;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636389038903;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000001ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000001ffff;
+  __m128i_out = __lsx_vsat_du(__m128i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsat_b(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_w(__m128i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_hu(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_hu(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x003f0000003f0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x003f0000003f0000;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_du(__m128i_op0,0x22);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_du(__m128i_op0,0x36);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffff81010102;
+  *((unsigned long*)& __m128i_result[1]) = 0x03ff0101fc010102;
+  *((unsigned long*)& __m128i_result[0]) = 0x03fffffffc010102;
+  __m128i_out = __lsx_vsat_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000001fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_du(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_du(__m128i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_du(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000101010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000101010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vsat_du(__m128i_op0,0x34);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsat_w(__m128i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_bu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_w(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vsat_b(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffcd63ffffcd63;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffd765ffffd765;
+  *((unsigned long*)& __m128i_result[1]) = 0x1f1f1f1f1f1f1f1f;
+  *((unsigned long*)& __m128i_result[0]) = 0x1f1f1f1f1f1f1f1f;
+  __m128i_out = __lsx_vsat_bu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001a323b5430048c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x008f792cab1cb915;
+  *((unsigned long*)& __m128i_result[1]) = 0x001a323b00ffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x008f792c00ffffff;
+  __m128i_out = __lsx_vsat_wu(__m128i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000006de1;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5f9ccf33cf600000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000007;
+  *((unsigned long*)& __m128i_result[0]) = 0x0007000700070000;
+  __m128i_out = __lsx_vsat_hu(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000001fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000100;
+  __m128i_out = __lsx_vsat_du(__m128i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_w(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffd27db010d20fbf;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffff00000000f;
+  __m128i_out = __lsx_vsat_w(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_bu(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0674c886fcba4e98;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfdce8003090b0906;
+  *((unsigned long*)& __m128i_result[1]) = 0x003fffc0ffc0003f;
+  *((unsigned long*)& __m128i_result[0]) = 0xffc0ffc0003f003f;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000003ff8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000003ff8;
+  __m128i_out = __lsx_vsat_w(__m128i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_h(__m128i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsat_wu(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xa8a74bff9e9e0070;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9e9e72ff9e9ff9ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffffffffffff;
+  __m128i_out = __lsx_vsat_du(__m128i_op0,0x2f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000120000000d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vsat_bu(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
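+  /* The cases below cover the __lsx_vexth_* intrinsics, which widen the
+     elements taken from the high half of the source vector, sign- or
+     zero-extending according to the variant.  */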
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x007fffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000f909;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001000001;
+  __m128i_out = __lsx_vexth_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_hu_bu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_hu_bu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0028280000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x012927ffff272800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0028280000000000;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffff01ff01;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000b5207f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fff7fc01;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vexth_du_wu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_hu_bu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_du_wu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000082020201;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000820200000201;
+  __m128i_out = __lsx_vexth_wu_hu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffe;
+  __m128i_out = __lsx_vexth_w_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x004f0080004f0080;
+  *((unsigned long*)& __m128i_result[0]) = 0x004f0080004f0080;
+  __m128i_out = __lsx_vexth_hu_bu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00003ff000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_w_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000001fc00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000000020000;
+  __m128i_out = __lsx_vexth_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x03c0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x03c0038000000380;
+  *((unsigned long*)& __m128i_result[1]) = 0x000003c000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_w_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff0000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ff000000ff;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_w_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5c9c9c9ce3636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x63635c9e63692363;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000005c9c9c9c;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffe3636363;
+  __m128i_out = __lsx_vexth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xb9fe3640e4eb1b18;
+  *((unsigned long*)& __m128i_op0[0]) = 0x800000005b4b1b18;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffb9fe00003640;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffe4eb00001b18;
+  __m128i_out = __lsx_vexth_w_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_hu_bu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff007f00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff007f00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ff0000007f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_hu_bu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000005;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x370bdfec00130014;
+  *((unsigned long*)& __m128i_op0[0]) = 0x370bdfec00130014;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000370bffffdfec;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001300000014;
+  __m128i_out = __lsx_vexth_w_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xe500c085c000c005;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe5c1a185c48004c5;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffe500ffffc085;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffc000ffffc005;
+  __m128i_out = __lsx_vexth_w_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_du_wu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000020000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000020000020;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000020000020;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000fff0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000b4a00008808;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080800000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000b4a00008808;
+  __m128i_out = __lsx_vexth_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001000100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_du_wu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x63b2ac27aa076aeb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000063b2ac27;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffaa076aeb;
+  __m128i_out = __lsx_vexth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_w_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vexth_hu_bu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0x5);
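+  /* The __lsx_vpickve2gr_b result above is not compared; the call only
+     exercises the builtin.  */
+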
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_hu_bu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffc;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x007fffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x002cffacffacffab;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000007f00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vexth_hu_bu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000080;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1010111105050000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4040000041410101;
+  *((unsigned long*)& __m128i_result[1]) = 0x0010001000110011;
+  *((unsigned long*)& __m128i_result[0]) = 0x0005000500000000;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x002a001a001a000b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000002a001a;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000001a000b;
+  __m128i_out = __lsx_vexth_d_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000400080003fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000bc2000007e10;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000400080003fff;
+  __m128i_out = __lsx_vexth_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_w_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000ef0000000003b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3a8000003a800000;
+  __m128i_out = __lsx_vexth_q_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000003e2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000003ffe2;
+  __m128i_out = __lsx_vexth_h_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010012;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fec20704;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000012;
+  __m128i_out = __lsx_vexth_wu_hu(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vexth_qu_du(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
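+  /* __lsx_vsigncov_{b,h,w,d}: each result element is zero when the matching
+     element of the first operand is zero, the second operand's element when
+     it is positive, and the negation of that element when it is negative.  */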
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00003f803f800100;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x52527d7d52527d7d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x870968c1f56bb3cd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xf000e001bf84df83;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff8e001ff84e703;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ca354688;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff35cab978;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6a57a30ff0000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfe00fe00fe00fd01;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe00fffefe0100f6;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffff0000010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0100010000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0100010000010000;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000020000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000183fffffe5;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000400000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000400000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff3d06ffff4506;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ffffffe7ffff800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff6fff6fff6fff6;
+  __m128i_out = __lsx_vsigncov_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3fffff0000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3fffff0000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3f80000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3f80000000000000;
+  __m128i_out = __lsx_vsigncov_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff8fff8fff8fff8;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff8fff8fff8fff8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x52525252525252cb;
+  *((unsigned long*)& __m128i_op1[0]) = 0x52525252525252cb;
+  *((unsigned long*)& __m128i_result[1]) = 0xaeaeaeaeaeaeae35;
+  *((unsigned long*)& __m128i_result[0]) = 0xaeaeaeaeaeaeae35;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op0[0]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op1[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op1[0]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_result[1]) = 0x370bdfec00130014;
+  *((unsigned long*)& __m128i_result[0]) = 0x370bdfec00130014;
+  __m128i_out = __lsx_vsigncov_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002020002020200;
+  *((unsigned long*)& __m128i_op0[0]) = 0x021f3b0205150600;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000300400002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000100010040fffb;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000300400002;
+  *((unsigned long*)& __m128i_result[0]) = 0x000100010040fffb;
+  __m128i_out = __lsx_vsigncov_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ff801c9e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000810000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0080008000800080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x008003496dea0c61;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0101000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101030100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000400000004;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000004;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1ab6021f72496458;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7750af4954c29940;
+  *((unsigned long*)& __m128i_result[1]) = 0xe64afee18eb79ca8;
+  *((unsigned long*)& __m128i_result[0]) = 0x89b051b7ac3e67c0;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x441ba9fcffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x181b2541ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffff7ffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fffffff7ffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff010181010102;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff81010102;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000045340a6;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000028404044;
+  *((unsigned long*)& __m128i_op1[1]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003f0000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vsigncov_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0010001000000010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op1[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_result[1]) = 0x67eb85af0000b000;
+  *((unsigned long*)& __m128i_result[0]) = 0xc8847ef6ed3f2000;
+  __m128i_out = __lsx_vsigncov_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000103;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffffc;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000034;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x003ffffe00800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x004001be00dc008e;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffff0100010001;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff9fffefff9ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x04faf60009f5f092;
+  *((unsigned long*)& __m128i_op1[0]) = 0x04fafa9200000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfc06066e00000000;
+  __m128i_out = __lsx_vsigncov_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffe0002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000667ae56;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000667ae56;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000020;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000100020002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0002000100020002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000100020002;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010012;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffe1ffc0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010012;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffe1ffc0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010012;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffe1ffc0;
+  __m128i_out = __lsx_vsigncov_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
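+  /* __lsx_vmskltz_{b,h,w,d} set bit i of the result when element i of the
+     source vector is negative; the mask occupies the least-significant bits
+     of the destination and all other bits are zero.  */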
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000003;
+  __m128i_out = __lsx_vmskltz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x007fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x85bd6b0e94d89998;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd83c8081ffff8080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000f;
+  __m128i_out = __lsx_vmskltz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7505443065413aed;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0100d6effefd0498;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000013d;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000f0000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000002;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000100010001fffd;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100010001007c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000001007c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1111113111111141;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1111113111111121;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9780697084f07dd7;
+  *((unsigned long*)& __m128i_op0[0]) = 0x87e3285243051cf3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000cdc1;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x05d0ae6002e8748e;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcd1de80217374041;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000065a0;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00d3012acc56f9bb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000a0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000004b01;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ff00;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21201f1e1d1c1b1a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1918171615141312;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000f;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ff08ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vmskltz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_op0[0]) = 0x370bdfecffecffec;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000003f3f;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000022;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000003;
+  __m128i_out = __lsx_vmskltz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000008080600;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fff0018;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000004;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_w(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000035697d4e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000013ecaadf2;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000006de1;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5f9ccf33cf600000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003ffffe00800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000034;
+  __m128i_out = __lsx_vmskltz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4399d3221a29d3f2;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc3818bffe7b7a7b8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vmskltz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x63636b6afe486741;
+  *((unsigned long*)& __m128i_op0[0]) = 0x41f8e880ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000027;
+  __m128i_out = __lsx_vmskltz_h(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskltz_d(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0403cfcf01c1595e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x837cd5db43fc55d4;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000cb4a;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff7f01ff01;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000d;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffe080f6efc100f7;
+  *((unsigned long*)& __m128i_op0[0]) = 0xefd32176ffe100f7;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe813f00fe813f00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000033;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fffe00006aea;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffce;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmskgez_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000210011084;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001e1f;
+  __m128i_out = __lsx_vmsknz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9c9c9c9c9c9c9c9c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9c9c9c9c63636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmsknz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x009500b10113009c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x009500b10113009c;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000005d5d;
+  __m128i_out = __lsx_vmsknz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmsknz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vmsknz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffff000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000fe;
+  __m128i_out = __lsx_vmsknz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000fffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000007f41;
+  __m128i_out = __lsx_vmsknz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0014001400140000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000554;
+  __m128i_out = __lsx_vmsknz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmsknz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x202544f490f2de35;
+  *((unsigned long*)& __m128i_op0[0]) = 0x202544f490f2de35;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vmsknz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000a74aa8a55ab;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6adeb5dfcb000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000003ff8;
+  __m128i_out = __lsx_vmsknz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x317fce80317fce80;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ff00;
+  __m128i_out = __lsx_vmsknz_b(__m128i_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x00a300a300a300a3;
+  *((unsigned long*)& __m128i_result[0]) = 0x00a300a300a300a3;
+  __m128i_out = __lsx_vldi(1187);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffe15;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffe15;
+  __m128i_out = __lsx_vldi(3605);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0xecececececececec;
+  *((unsigned long*)& __m128i_result[0]) = 0xecececececececec;
+  __m128i_out = __lsx_vldi(1004);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffff00ff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffff00ff00ff00;
+  __m128i_out = __lsx_vldi(-1686);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001effae001effae;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_result[1]) = 0x004d004d004d004d;
+  *((unsigned long*)& __m128i_result[0]) = 0x004d004d004d004d;
+  __m128i_out = __lsx_vldi(1101);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a0000000a000000;
+  __m128i_out = __lsx_vldi(-3318);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffff00ff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffff00ff00ff00;
+  __m128i_out = __lsx_vldi(-1686);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a0000000a000000;
+  __m128i_out = __lsx_vldi(-3318);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-mem.c b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-mem.c
new file mode 100644
index 00000000000..49c785a6d04
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-mem.c
@@ -0,0 +1,537 @@
+/* { dg-do run } */
+/* { dg-options "-mlsx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lsxintrin.h>
+int main ()
+{
+  __m128i __m128i_op0, __m128i_op1, __m128i_op2, __m128i_out, __m128i_result;
+  __m128 __m128_op0, __m128_op1, __m128_op2, __m128_out, __m128_result;
+  __m128d __m128d_op0, __m128d_op1, __m128d_op2, __m128d_out, __m128d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+  *((int*)& __m128_op0[3]) = 0x0000c77c;
+
+  *((unsigned long*)& __m128i_result[1]) = 0x00a300a300a300a3;
+  *((unsigned long*)& __m128i_result[0]) = 0x00a300a300a300a3;
+  __m128i_out = __lsx_vldi(1187);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffe15;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffe15;
+  __m128i_out = __lsx_vldi(3605);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0xecececececececec;
+  *((unsigned long*)& __m128i_result[0]) = 0xecececececececec;
+  __m128i_out = __lsx_vldi(1004);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffff00ff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffff00ff00ff00;
+  __m128i_out = __lsx_vldi(-1686);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001effae001effae;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_result[1]) = 0x004d004d004d004d;
+  *((unsigned long*)& __m128i_result[0]) = 0x004d004d004d004d;
+  __m128i_out = __lsx_vldi(1101);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a0000000a000000;
+  __m128i_out = __lsx_vldi(-3318);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffff00ff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffff00ff00ff00;
+  __m128i_out = __lsx_vldi(-1686);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a0000000a000000;
+  __m128i_out = __lsx_vldi(-3318);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffff0000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0404040404040404;
+  *((unsigned long*)& __m128i_op0[0]) = 0xec68e3ef5a98ed54;
+  int_out = __lsx_bz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf4b6f3f52f4ef4a8;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x195f307a5d04acbb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001fc0000;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0400040004000400;
+  int_out = __lsx_bz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffff01ff01;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffe03;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffe03;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000006;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00005555aaabfffe;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fff7fff;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f0101070101010f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000127f010116;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000002bfd9461;
+  int_out = __lsx_bz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xefffdffff0009d3d;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000010000c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x006ffffefff0000d;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000958affff995d;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100010001007c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  int_out = __lsx_bnz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000ffc2f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00201df000000000;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ca0200000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ca0200000000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff082f000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003f000000000000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0202fe02fd020102;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0a0aa9890a0ac5f3;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6368d2cd63636363;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808081;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0038d800ff000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00fffe00fffffe00;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000008000008080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080800000800080;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000002e8b164;
+  *((unsigned long*)& __m128i_op0[0]) = 0x199714a038478040;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffe00029f9f6061;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x007f008000ea007f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x687a8373f249bc44;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7861145d9241a14a;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0000ffff0000;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fff0018;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ffff;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010001000100010;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff7ffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffffffe;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000004870ba0;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001fffe0001fffe;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcd636363cd636363;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000100;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe593c8c4e593c8c4;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff9727ffff9727;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffe79ffffba5f;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000021;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000080801030000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000080103040000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000011ffee;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000dfff2;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xf784000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffff784;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xf784000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffff784;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff009ff83f;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_result[0]) = 0x3ab7a3fc47a5c31a;
+  __m128i_out = __lsx_vld((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_result[0]) = 0x3ab7a3fc47a5c31a;
+  __m128i_out = __lsx_vldx((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0xc3c3c3c3c3c3c3c3;
+  *((unsigned long*)& __m128i_result[0]) = 0xc3c3c3c3c3c3c3c3;
+  __m128i_out = __lsx_vldrepl_b((unsigned long *)&__m128i_op0, 0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0xc31ac31ac31ac31a;
+  *((unsigned long*)& __m128i_result[0]) = 0xc31ac31ac31ac31a;
+  __m128i_out = __lsx_vldrepl_h((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x47a5c31a47a5c31a;
+  *((unsigned long*)& __m128i_result[0]) = 0x47a5c31a47a5c31a;
+  __m128i_out = __lsx_vldrepl_w((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[0]) = 0x3ab7a3fc47a5c31a;
+  __m128i_out = __lsx_vldrepl_d((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x0;
+  __lsx_vst(__m128i_op0, (unsigned long *)&__m128i_result, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_op0, __m128i_result);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x0;
+  __lsx_vstx(__m128i_op0, (unsigned long *)&__m128i_result, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_op0, __m128i_result);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x05;
+  *((unsigned long*)& __m128i_out[1]) = 0x0;
+  *((unsigned long*)& __m128i_out[0]) = 0x0;
+  __lsx_vstelm_b(__m128i_op0, (unsigned long *)&__m128i_out, 0x0, 0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x5c05;
+  *((unsigned long*)& __m128i_out[1]) = 0x0;
+  *((unsigned long*)& __m128i_out[0]) = 0x0;
+  __lsx_vstelm_h(__m128i_op0, (unsigned long *)&__m128i_out, 0x0, 0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0xc9d85c05;
+  *((unsigned long*)& __m128i_out[1]) = 0x0;
+  *((unsigned long*)& __m128i_out[0]) = 0x0;
+  __lsx_vstelm_w(__m128i_op0, (unsigned long *)&__m128i_out, 0x0, 0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_out[1]) = 0x0;
+  *((unsigned long*)& __m128i_out[0]) = 0x0;
+  __lsx_vstelm_d(__m128i_op0, (unsigned long *)&__m128i_out, 0x0, 0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_result[0]) = 0x3ab7a3fc47a5c31a;
+  __m128i_out = __lsx_vldx((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x0;
+  __lsx_vstx(__m128i_op0, (unsigned long *)&__m128i_result, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_op0, __m128i_result);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0xc3c3c3c3c3c3c3c3;
+  *((unsigned long*)& __m128i_result[0]) = 0xc3c3c3c3c3c3c3c3;
+  __m128i_out = __lsx_vldrepl_b((unsigned long *)&__m128i_op0, 0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0xc31ac31ac31ac31a;
+  *((unsigned long*)& __m128i_result[0]) = 0xc31ac31ac31ac31a;
+  __m128i_out = __lsx_vldrepl_h((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x47a5c31a47a5c31a;
+  *((unsigned long*)& __m128i_result[0]) = 0x47a5c31a47a5c31a;
+  __m128i_out = __lsx_vldrepl_w((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[0]) = 0x3ab7a3fc47a5c31a;
+  __m128i_out = __lsx_vldrepl_d((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x05;
+  *((unsigned long*)& __m128i_out[1]) = 0x0;
+  *((unsigned long*)& __m128i_out[0]) = 0x0;
+  __lsx_vstelm_b(__m128i_op0, (unsigned long *)&__m128i_out, 0x0, 0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x5c05;
+  *((unsigned long*)& __m128i_out[1]) = 0x0;
+  *((unsigned long*)& __m128i_out[0]) = 0x0;
+  __lsx_vstelm_h(__m128i_op0, (unsigned long *)&__m128i_out, 0x0, 0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0xc9d85c05;
+  *((unsigned long*)& __m128i_out[1]) = 0x0;
+  *((unsigned long*)& __m128i_out[0]) = 0x0;
+  __lsx_vstelm_w(__m128i_op0, (unsigned long *)&__m128i_out, 0x0, 0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_out[1]) = 0x0;
+  *((unsigned long*)& __m128i_out[0]) = 0x0;
+  __lsx_vstelm_d(__m128i_op0, (unsigned long *)&__m128i_out, 0x0, 0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-perm.c b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-perm.c
new file mode 100644
index 00000000000..9b5fd3616ae
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-perm.c
@@ -0,0 +1,5555 @@
+/* { dg-do run } */
+/* { dg-options "-mlsx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lsxintrin.h>
+int main ()
+{
+  __m128i __m128i_op0, __m128i_op1, __m128i_op2, __m128i_out, __m128i_result;
+  __m128 __m128_op0, __m128_op1, __m128_op2, __m128_out, __m128_result;
+  __m128d __m128d_op0, __m128d_op1, __m128d_op2, __m128d_out, __m128d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+  *((int*)& __m128_op0[3]) = 0x0000c77c;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000007942652524;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4265252400000000;
+  __m128i_out = __lsx_vinsgr2vr_w(__m128i_op0,int_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vinsgr2vr_b(__m128i_op0,int_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_op1 = 0x0000007942652524;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff2524ffffffff;
+  __m128i_out = __lsx_vinsgr2vr_h(__m128i_op0,int_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000210011084;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000017fff9000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000200000000;
+  __m128i_out = __lsx_vinsgr2vr_w(__m128i_op0,int_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  long_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vinsgr2vr_d(__m128i_op0,long_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffff0000;
+  __m128i_out = __lsx_vinsgr2vr_h(__m128i_op0,int_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vinsgr2vr_b(__m128i_op0,int_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0080000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0080000000000000;
+  __m128i_out = __lsx_vinsgr2vr_b(__m128i_op0,int_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5d5d5d5d5d5d5d55;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x5d5d5d005d5d5d55;
+  __m128i_out = __lsx_vinsgr2vr_b(__m128i_op0,int_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000003fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003fffffff;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000000080000000;
+  __m128i_out = __lsx_vinsgr2vr_h(__m128i_op0,int_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  unsigned_int_result = 0x00000000ffffffff;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffff0000;
+  __m128i_out = __lsx_vinsgr2vr_h(__m128i_op0,int_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vinsgr2vr_w(__m128i_op0,int_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2020202020202020;
+  int_op1 = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_result[0]) = 0x202020202020ff20;
+  __m128i_out = __lsx_vinsgr2vr_b(__m128i_op0,int_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00fe01fc0005fff4;
+  int_op1 = 0x0000000020202020;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000820202020;
+  *((unsigned long*)& __m128i_result[0]) = 0x00fe01fc0005fff4;
+  __m128i_out = __lsx_vinsgr2vr_w(__m128i_op0,int_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffbfffffffbf;
+  long_op1 = 0x0000000000003a24;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffbfffffffbf;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000003a24;
+  __m128i_out = __lsx_vinsgr2vr_d(__m128i_op0,long_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ef8000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ef8000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7ef8000000000000;
+  __m128i_out = __lsx_vinsgr2vr_w(__m128i_op0,int_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000000;
+  long_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vinsgr2vr_d(__m128i_op0,long_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_op1 = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff000000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vinsgr2vr_w(__m128i_op0,int_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000001000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001000;
+  int_op1 = 0x000000007ff00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000001000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000001000;
+  __m128i_out = __lsx_vinsgr2vr_h(__m128i_op0,int_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000020006;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000060000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vinsgr2vr_b(__m128i_op0,int_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vinsgr2vr_w(__m128i_op0,int_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000020006;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000600;
+  __m128i_out = __lsx_vinsgr2vr_b(__m128i_op0,int_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000003;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000003;
+  __m128i_out = __lsx_vinsgr2vr_b(__m128i_op0,int_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vinsgr2vr_w(__m128i_op0,int_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vinsgr2vr_w(__m128i_op0,int_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vinsgr2vr_h(__m128i_op0,int_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000001f1f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff000000001f1f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vinsgr2vr_b(__m128i_op0,int_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  long_op1 = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000040;
+  __m128i_out = __lsx_vinsgr2vr_d(__m128i_op0,long_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffff0000;
+  int_op1 = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ffffff0000;
+  __m128i_out = __lsx_vinsgr2vr_w(__m128i_op0,int_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff000000000000;
+  __m128i_out = __lsx_vinsgr2vr_h(__m128i_op0,int_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x04faf60009f5f092;
+  *((unsigned long*)& __m128i_op0[0]) = 0x04fafa9200000000;
+  int_op1 = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x04faf600fff5f092;
+  *((unsigned long*)& __m128i_result[0]) = 0x04fafa9200000000;
+  __m128i_out = __lsx_vinsgr2vr_b(__m128i_op0,int_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vinsgr2vr_b(__m128i_op0,int_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
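+  /* __lsx_vpickve2gr_{b,h,w,d} and the _bu/_hu/_wu/_du variants extract the
+     element selected by the immediate index into a scalar, sign- or
+     zero-extending it.  The cases below only exercise the extraction; the
+     scalar outputs are not compared.  */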
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x7);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0x4);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ff0000ff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x01fc020000fe0100;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x7);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000463fd2902d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5ccd54bbfcac806c;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x697eba2bedfa9c82;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd705c77a7025c899;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x5);
+  *((unsigned long*)& __m128i_op0[1]) = 0x2700000000002727;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0400040004000400;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x5);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007d3ac600;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0x7);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080001300000013;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dffbfff00000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0200400000000001;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000003fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003fffffff;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000490000004d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001ffffffff;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffe5;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff84fff4ff84fff4;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00a6ffceffb60052;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0xa);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff0cffffff18;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x1);
+  *((int*)& __m128_op0[3]) = 0x00000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101010101;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0xc);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  unsigned_int_result = 0x00000000ffffffff;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0xbfffbfffbfffbffe;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x0);
+  *((unsigned long*)& __m128d_op0[1]) = 0x7fff7fff7fff7fff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8006000080020000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3f8000003f800000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x3f8000003f800000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0bd80bd80bd80bd8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0bd80bd80bd80bd8;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x80000000b57ec564;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0x8);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000c0000bd49;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000c7fff000c;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0xb);
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000bd3d;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000100c6ffef10c;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffff01;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2020202020207f7f;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000003a24;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003dbe88077c78c1;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000003a24;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0x9);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000200000000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe500ffffc085;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0xb);
+  *((unsigned long*)& __m128i_op0[1]) = 0x003fffff00000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000200000002000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffe080f6efc100f7;
+  *((unsigned long*)& __m128i_op0[0]) = 0xefd32176ffe100f7;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000010000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0x5);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc0ff80ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x80008000ec82ab51;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000800089e08000;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0x8);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x1);
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x5);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003ddc5dac;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6453f5e01d6e5000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000fdec000000000;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x801dd5cb0004e058;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00009c7c00007176;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0xe);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000008;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8d78336c83652b86;
+  *((unsigned long*)& __m128i_op0[0]) = 0x39c51f389c0d6112;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8d78336c83652b86;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff0000857a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x05fafe0101fe000e;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x4);
+  *((unsigned long*)& __m128i_op0[1]) = 0xe2560afe9c001a18;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000600007fff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x021b7d2449678a35;
+  *((unsigned long*)& __m128i_op0[0]) = 0x030298a621030a49;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x4);
+  *((unsigned long*)& __m128d_op0[1]) = 0x00000000abba7980;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001effae001effae;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_result[1]) = 0x004d004d004d004d;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0674c8868a74fc80;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfdce8003090b0906;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000008686;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000c6c60000c6c6;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000feff23560000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fd1654860000;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0xc);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000056000056;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f4f00004f4f0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f4f00004f4f0000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f4f4f4f4f4f4f4f;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000120000000d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000e0000000e;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffffffffffff;
+
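+  /* __lsx_vreplgr2vr_{b,h,w,d} broadcasts the low 8/16/32/64 bits of a
+     scalar register into every element of the vector.  */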
+  int_op0 = 0x0000000059815d00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000400;
+  *((unsigned long*)& __m128i_result[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_result[0]) = 0x0400040004000400;
+  __m128i_out = __lsx_vreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000400;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  long_op0 = 0x0000000000000400;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000400;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000400;
+  __m128i_out = __lsx_vreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  long_op0 = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3f8000003f800000;
+  __m128i_out = __lsx_vreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  long_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000020202020;
+  *((unsigned long*)& __m128i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_result[0]) = 0x2020202020202020;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ff000000ff;
+  __m128i_out = __lsx_vreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  long_op0 = 0x000000007ff00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000007ff00000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000007ff00000;
+  __m128i_out = __lsx_vreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  long_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x000000000000001e;
+  *((unsigned long*)& __m128i_result[1]) = 0x1e1e1e1e1e1e1e1e;
+  *((unsigned long*)& __m128i_result[0]) = 0x1e1e1e1e1e1e1e1e;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
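+  /* __lsx_vreplve_{b,h,w,d} replicates one element of the vector across the
+     whole register; the element index comes from a scalar register and is
+     taken modulo the element count, as the out-of-range indices below show.  */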
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x00000045eef14fe8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000080000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_b(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_b(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x00000000000000ac;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_d(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_b(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x78c00000ff000000;
+  int_op1 = 0x0000000000000400;
+  *((unsigned long*)& __m128i_result[1]) = 0xff000000ff000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff000000ff000000;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x803f800080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe0404041c0404040;
+  int_op1 = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0xe0404041e0404041;
+  *((unsigned long*)& __m128i_result[0]) = 0xe0404041e0404041;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_b(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffffffff0001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_h(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  int_op1 = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000020006;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_b(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_h(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffb4ff;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffb4ff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffb4ff;
+  __m128i_out = __lsx_vreplve_d(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000020202020;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_d(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x000000007ff00000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_d(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000020006;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_h(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffff4;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffff4;
+  int_op1 = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vreplve_b(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ffff00ff00ff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ffff00ff00ff00;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xff00ff00ff00ff00;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000000080000000;
+  int_op1 = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080808080808080;
+  __m128i_out = __lsx_vreplve_b(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ff0000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000001b;
+  int_op1 = 0xffffffff89e08000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001b0000001b;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001b0000001b;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_h(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_b(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfefefefdbffefdfe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfefefeeffef7fefe;
+  int_op1 = 0xffffffff9c0d6112;
+  *((unsigned long*)& __m128i_result[1]) = 0xbffefdfebffefdfe;
+  *((unsigned long*)& __m128i_result[0]) = 0xbffefdfebffefdfe;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff800000ff800000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_result[0]) = 0xff800000ff800000;
+  __m128i_out = __lsx_vreplve_w(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffd27db010d20fbf;
+  int_op1 = 0x0000000000000040;
+  *((unsigned long*)& __m128i_result[1]) = 0x0fbf0fbf0fbf0fbf;
+  *((unsigned long*)& __m128i_result[0]) = 0x0fbf0fbf0fbf0fbf;
+  __m128i_out = __lsx_vreplve_h(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x00000000090b0906;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_b(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0xffffffffffff8a35;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplve_d(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x05dfffc3ffffffc0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000047fe2f0;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000047fe2f0;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000047fe2f0;
+  __m128i_out = __lsx_vreplve_d(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffe011df03e;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xf03ef03ef03ef03e;
+  *((unsigned long*)& __m128i_result[0]) = 0xf03ef03ef03ef03e;
+  __m128i_out = __lsx_vreplve_h(__m128i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
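+  /* __lsx_vreplvei_{b,h,w,d} is the immediate-index form of the replicate:
+     the element selected by the immediate is copied into every element.  */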
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ff0000ff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x01fc020000fe0100;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x7);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000055555501;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000005555555554;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000005555555554;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000005555555554;
+  __m128i_out = __lsx_vreplvei_d(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000036280000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x42a0000042a02000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x697eba2bedfa9c82;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd705c77a7025c899;
+  *((unsigned long*)& __m128i_result[1]) = 0xedfaedfaedfaedfa;
+  *((unsigned long*)& __m128i_result[0]) = 0xedfaedfaedfaedfa;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000300000003;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000a0a08000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5350a08000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m128i_result[0]) = 0x8000800080008000;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x80010009816ac5de;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8001000184000bd8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0bd80bd80bd80bd8;
+  *((unsigned long*)& __m128i_result[0]) = 0x0bd80bd80bd80bd8;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_b(__m128i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100010001007c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_b(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_d(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1149a96eb1a08000;
+  *((unsigned long*)& __m128i_result[1]) = 0xb1a08000b1a08000;
+  *((unsigned long*)& __m128i_result[0]) = 0xb1a08000b1a08000;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808080808080808;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_d(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffcc9a989a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_b(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000adadadad;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000adadadad;
+  *((unsigned long*)& __m128i_result[1]) = 0xadadadadadadadad;
+  *((unsigned long*)& __m128i_result[0]) = 0xadadadadadadadad;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_b(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_d(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080808080808080;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vreplvei_b(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3131313131313131;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_d(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000a752a55;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0a753500a9fa0d06;
+  *((unsigned long*)& __m128i_result[1]) = 0x0d060d060d060d06;
+  *((unsigned long*)& __m128i_result[0]) = 0x0d060d060d060d06;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_b(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vreplvei_w(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vreplvei_h(__m128i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
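+  /* __lsx_vbsll_v shifts the whole 128-bit register left by an immediate
+     number of bytes; the expected values below match the shift count taken
+     modulo 16.  */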
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ffffff000000ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffff000000ff00;
+  __m128i_out = __lsx_vbsll_v(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xff00000000000000;
+  __m128i_out = __lsx_vbsll_v(__m128i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbsll_v(__m128i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0a00000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbsll_v(__m128i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0141010101410101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0141010101410101;
+  *((unsigned long*)& __m128i_result[1]) = 0x4101010141010100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbsll_v(__m128i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbsll_v(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbsll_v(__m128i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbsll_v(__m128i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001580000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbsll_v(__m128i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
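+  /* __lsx_vbsrl_v is the corresponding right shift by an immediate byte
+     count.  */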
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000401000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000040100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010000;
+  __m128i_out = __lsx_vbsrl_v(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000003fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x003fffffff000000;
+  __m128i_out = __lsx_vbsrl_v(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0005fe0300010101;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe03000101010000;
+  __m128i_out = __lsx_vbsrl_v(__m128i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vbsrl_v(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xd3259a2984048c23;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf9796558e39953fd;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000d3259a;
+  __m128i_out = __lsx_vbsrl_v(__m128i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
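+  /* __lsx_vpackev_{b,h,w,d} interleaves the even-indexed elements of the two
+     sources (op1 supplies the lower element of each pair, op0 the upper);
+     __lsx_vpackod_* does the same with the odd-indexed elements.  Editorial
+     note for the generated cases below.  */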
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00001802041b0013;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00001802041b0013;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vpackev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000201000000000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000020100;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ff000000ff;
+  __m128i_out = __lsx_vpackev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffc002000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00003ff000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000fffc00000000;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf4b6f3f52f4ef4a8;
+  *((unsigned long*)& __m128i_result[1]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xf4b6f3f52f4ef4a8;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x03574e3a62407e03;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001010000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x03574e3a62407e03;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000001010000;
+  *((unsigned long*)& __m128i_result[1]) = 0x03574e3a03574e3a;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vpackev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000001fe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x10f917d72d3d01e4;
+  *((unsigned long*)& __m128i_op1[0]) = 0x203e16d116de012b;
+  *((unsigned long*)& __m128i_result[1]) = 0x00f900d7003d00e4;
+  *((unsigned long*)& __m128i_result[0]) = 0x003e00d100de002b;
+  __m128i_out = __lsx_vpackev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010000000000;
+  __m128i_out = __lsx_vpackev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000003a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100000015;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc2f9bafac2fac2fa;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbdf077eee7e20468;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe3b1cc6953e7db29;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000e7e20468;
+  *((unsigned long*)& __m128i_result[0]) = 0xc2fac2fa53e7db29;
+  __m128i_out = __lsx_vpackev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xe0404041e0404041;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe0404041e0404041;
+  *((unsigned long*)& __m128i_op1[1]) = 0x803f800080000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe0404041c0404040;
+  *((unsigned long*)& __m128i_result[1]) = 0xe0404041e0404041;
+  *((unsigned long*)& __m128i_result[0]) = 0x803f800080000000;
+  __m128i_out = __lsx_vpackod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfe80000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00fe000000000000;
+  __m128i_out = __lsx_vpackod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfe80ffffffffff02;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe80ff80ffff0000;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xf8f8e018f8f8e810;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf8f8f008f8f8f800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000e0180000e810;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000f0080000f800;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1211100f11100f0e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x100f0e0d0f0e0d0c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[1]) = 0x11000f2010000e20;
+  *((unsigned long*)& __m128i_result[0]) = 0x0f000d200e000c20;
+  __m128i_out = __lsx_vpackev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x11000f2010000e20;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0f000d200e000c20;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x11000f2000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0f000d2000000000;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe3e3e3e3e3e3e3e3;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe3e3e3e3e3e3e3e3;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xe3e3e3e3e3e3e3e3;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f8000004f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f8000004f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f8000004f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f8000004f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_result[0]) = 0x4f804f804f804f80;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ffe7ffe7ffe7ffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00007ffe00007ffe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000001c00ffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m128i_result[1]) = 0x000001000f00fe00;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000017fff00fe7f;
+  __m128i_out = __lsx_vpackev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3f8000003f800000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000f0009d3c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000016fff9d3d;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffff000f0008d3c;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffff0016fff8d3d;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff000000003c3c;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff0101ffff3d3d;
+  __m128i_out = __lsx_vpackev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000958affff995d;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000958affff995d;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000c000ffffc000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000006f00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000c00000000000;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x40f0001000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x40f0001000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m128i_result[1]) = 0x40f0001000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffcfffcfffcfffc;
+  __m128i_out = __lsx_vpackod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vpackod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2222272011111410;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2222272011111410;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000002000000020;
+  __m128i_out = __lsx_vpackod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001ca02f854;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100013fa0;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffef8;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffdfffdfffdffee0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffffffdfffdf;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffefffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffefffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffefefffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vpackev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002fffefffd0001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1202120212021202;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1202120212021202;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0202fe02fd020102;
+  __m128i_out = __lsx_vpackev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0010100000100000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1000100000101000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010001000000010;
+  __m128i_out = __lsx_vpackod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21201f1e19181716;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x01203f1e3d1c3b1a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3918371635143312;
+  *((unsigned long*)& __m128i_result[1]) = 0x21011f3f193d173b;
+  *((unsigned long*)& __m128i_result[0]) = 0xff39ff37ff35ff33;
+  __m128i_out = __lsx_vpackod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5a6f5c53ebed3faa;
+  *((unsigned long*)& __m128i_op0[0]) = 0xa36aca4435b8b8e1;
+  *((unsigned long*)& __m128i_op1[1]) = 0x5a6f5c53ebed3faa;
+  *((unsigned long*)& __m128i_op1[0]) = 0xa36aca4435b8b8e1;
+  *((unsigned long*)& __m128i_result[1]) = 0x5c535c533faa3faa;
+  *((unsigned long*)& __m128i_result[0]) = 0xca44ca44b8e1b8e1;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000003fbf3fbf;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff7fff7fff7ff8;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff3fbfffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fffffff7fffffff;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x77c0404a4000403a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x77c03fd640003fc6;
+  *((unsigned long*)& __m128i_result[1]) = 0x04c0044a0400043a;
+  *((unsigned long*)& __m128i_result[0]) = 0x04c004d6040004c6;
+  __m128i_out = __lsx_vpackev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000006362ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000d0000000d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000dffff000d;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff80806362;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff00008080;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000000000ff;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000002002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000100000000fc;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000100000000fc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010000000000;
+  __m128i_out = __lsx_vpackod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0404050404040404;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0404050404040404;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000004040504;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000004040504;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0002000200020002;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000208000002080;
+  *((unsigned long*)& __m128i_result[1]) = 0x2080208020802080;
+  *((unsigned long*)& __m128i_result[0]) = 0x2080208020802080;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000807f80808000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80006b0000000b00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x80808080806b000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000807f00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x80006b0080808080;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000400000004000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00004000ffffffff;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010000;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000080008;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vpackod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000000b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000000b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000000b;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000001b0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000001b0000;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vpackev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001400000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000053a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff9000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffc000400000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffc000400000000;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000800000000000;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000002000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001f00000000;
+  __m128i_out = __lsx_vpackev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffefffe00000000;
+  __m128i_out = __lsx_vpackod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x00cd006300cd0063;
+  *((unsigned long*)& __m128i_result[0]) = 0x00cd006300cd0063;
+  __m128i_out = __lsx_vpackod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000aa822a79308f6;
+  *((unsigned long*)& __m128i_op0[0]) = 0x03aa558e1d37b5a1;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ff80fd820000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000aa822a79308f6;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000084d12ce;
+  __m128i_out = __lsx_vpackod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2e34594c3b000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x002e0059003b0000;
+  __m128i_out = __lsx_vpackod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe593c8c4e593c8c4;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8080000080800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x9380c4009380c400;
+  __m128i_out = __lsx_vpackev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc2007aff230027;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0080005eff600001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x01017f3c00000148;
+  *((unsigned long*)& __m128i_op1[0]) = 0x117d7f7b093d187f;
+  *((unsigned long*)& __m128i_result[1]) = 0xff23002700000148;
+  *((unsigned long*)& __m128i_result[0]) = 0xff600001093d187f;
+  __m128i_out = __lsx_vpackev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffff0000;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0002711250a27112;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00d2701294027112;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff7112ffff7112;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff7012ffff7112;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x30eb020302101b03;
+  *((unsigned long*)& __m128i_op0[0]) = 0x020310d0c0030220;
+  *((unsigned long*)& __m128i_op1[1]) = 0x30eb020302101b03;
+  *((unsigned long*)& __m128i_op1[0]) = 0x020310d0c0030220;
+  *((unsigned long*)& __m128i_result[1]) = 0x020310d0c0030220;
+  *((unsigned long*)& __m128i_result[0]) = 0x020310d0c0030220;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001e001e001e001e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001e001e001e001e;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffaeffaeffaeffae;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffaeffaeffaeffae;
+  *((unsigned long*)& __m128i_result[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_result[0]) = 0x001effae001effae;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000440efffff000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000003b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000440efffff000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000003b;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff2356fe165486;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5efeb3165bd7653d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff2356fe165486;
+  __m128i_out = __lsx_vpackod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000eefff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf8e1a03affffe3e2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000eefff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf8e1a03affffe3e2;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000efffefff;
+  *((unsigned long*)& __m128i_result[0]) = 0xa03aa03ae3e2e3e2;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000cecd00004657;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000c90000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00019d9a00008cae;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000000;
+  __m128i_out = __lsx_vpackod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x02b010f881a281a2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x27b169bbb8140001;
+  *((unsigned long*)& __m128i_result[1]) = 0x000010f8000081a2;
+  *((unsigned long*)& __m128i_result[0]) = 0x000069bb00000001;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpackev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
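+  /* __lsx_vpickev_{b,h,w,d} gathers the even-indexed elements: those of op1
+     form the low half of the result and those of op0 the high half;
+     __lsx_vpickod_* gathers the odd-indexed elements instead.  Editorial
+     note for the generated cases below.  */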
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xc2409edab019323f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x460f3b393ef4be3a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x460f3b393ef4be3a;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
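+  /* The __lsx_vpickve2gr_* cases below copy a single vector element to a
+     scalar (the signed forms sign-extend, the _*u forms zero-extend); their
+     scalar results are not asserted in this fragment.  Editorial note, not
+     part of the generated data.  */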
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x7);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0x4);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0004007c00fc0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000fc0000;
+  __m128i_out = __lsx_vpickev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ff0000ff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x01fc020000fe0100;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x7);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000463fd2902d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5ccd54bbfcac806c;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000401000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000001000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffe000ffff1fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000401000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001000001;
+  __m128i_out = __lsx_vpickod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vpickev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x697eba2bedfa9c82;
+  *((unsigned long*)& __m128i_op0[0]) = 0xd705c77a7025c899;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x5);
+  *((unsigned long*)& __m128i_op0[1]) = 0x2700000000002727;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffefefefe;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000ffff;
+  __m128i_out = __lsx_vpickev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xedfaedfaedfaedfa;
+  *((unsigned long*)& __m128i_op0[0]) = 0xedfaedfaedfaedfa;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xedfaedfaedfaedfa;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffff0000;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf436f3f52f4ef4a8;
+  *((unsigned long*)& __m128i_result[1]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf4b6f3f52f4ef4a8;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf4b6f3f52f4ef4a8;
+  *((unsigned long*)& __m128i_result[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0080000000000000;
+  __m128i_out = __lsx_vpickod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4811fda96793b23a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8f10624016be82fd;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfda9b23a624082fd;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffff0000;
+  __m128i_out = __lsx_vpickev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xaaaaffebcfb748e0;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfd293eab528e7ebe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0xffeb48e03eab7ebe;
+  __m128i_out = __lsx_vpickev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0400040004000400;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x5);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffff51cf8da;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffd6040188;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000101fffff8b68;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000b6fffff8095;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffff51cffffd604;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vpickod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffff7;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007d3ac600;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0x7);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080001300000013;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dffbfff00000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0200400000000001;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000003fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003fffffff;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000490000004d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001ffffffff;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffe5;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff84fff4ff84fff4;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00a6ffceffb60052;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0xa);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff0cffffff18;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff84fff4ff84fff4;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00a6ffceffb60052;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0xa);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff0cffffff18;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfefffefffeff6a0c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc2f9bafac2fac2fa;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffefefe6a;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000c2bac2c2;
+  __m128i_out = __lsx_vpickod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x1);
+  *((int*)& __m128_op0[3]) = 0x00000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101010101;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0xc);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  unsigned_int_result = 0x00000000ffffffff;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0xbfffbfffbfffbffe;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x11000f2010000e20;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0f000d200e000c20;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x11000f200f000d20;
+  __m128i_out = __lsx_vpickod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x0);
+  *((unsigned long*)& __m128d_op0[1]) = 0x7fff7fff7fff7fff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8006000080020000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3f8000003f800000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x3f8000003f800000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000805;
+  *((unsigned long*)& __m128i_op0[0]) = 0x978d95ac768d8784;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000104000800;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000897957687;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000408;
+  __m128i_out = __lsx_vpickod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0bd80bd80bd80bd8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0bd80bd80bd80bd8;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x80000000b57ec564;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0x8);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000c0000bd49;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000c7fff000c;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0xb);
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000bd3d;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffff00010000fff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffff00010000fff;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000100c6ffef10c;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffff01;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ff91fffffff5;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff00650001ffb0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffffffff0001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x2020202020207f7f;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ca02f854;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ca02f854;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ca0200000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ca0200000000;
+  __m128i_out = __lsx_vpickod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000120002000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_result[1]) = 0x2000200000013fa0;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000013fa0;
+  __m128i_out = __lsx_vpickev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000f7d1000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x773324887fffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000017161515;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000095141311;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000017fffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x1716151595141311;
+  __m128i_out = __lsx_vpickev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000c6c6ee22;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c6c62e8a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000c6c6ee22;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000c6c62e8a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000003a24;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003dbe88077c78c1;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000003a24;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_result[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m128i_result[0]) = 0x4040404040404040;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000dfa6e0c6;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000d46cdc13;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ef400ad21fc7081;
+  *((unsigned long*)& __m128i_op1[0]) = 0x28bf0351ec69b5f2;
+  *((unsigned long*)& __m128i_result[1]) = 0xdfa6e0c6d46cdc13;
+  *((unsigned long*)& __m128i_result[0]) = 0x21fc7081ec69b5f2;
+  __m128i_out = __lsx_vpickev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0x9);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000080000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x21201f1e1d001b1a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1918171615141312;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x21201f1e19181716;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vpickod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000ff000000ff;
+  __m128i_out = __lsx_vpickod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0002000200000000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffe500ffffc085;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x04c0044a0400043a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x04c004d6040004c6;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0004000400040004;
+  *((unsigned long*)& __m128i_result[1]) = 0x044a043a04d604c6;
+  *((unsigned long*)& __m128i_result[0]) = 0x0004000400040004;
+  __m128i_out = __lsx_vpickev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0002000000000007;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0006000000040000;
+  __m128i_out = __lsx_vpickod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x6363636363636363;
+  __m128i_out = __lsx_vpickod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0xb);
+  *((unsigned long*)& __m128i_op0[1]) = 0x003fffff00000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000020000020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000020000020;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000200000002000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000200000002000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffe080f6efc100f7;
+  *((unsigned long*)& __m128i_op0[0]) = 0xefd32176ffe100f7;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000010000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00001b4a00007808;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00001b4a00007808;
+  *((unsigned long*)& __m128i_result[1]) = 0x00001b4a00007808;
+  *((unsigned long*)& __m128i_result[0]) = 0x00001b4a00007808;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x3fc03fc000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f801fe000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ffff00010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x3fc03fc000000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f801fe000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3fc03fc000000004;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f801fe000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3fc03fc000000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3fc03fc000000003;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7f7f1fd800000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f1f00003f3f0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x3f3f00007f1f0000;
+  __m128i_out = __lsx_vpickod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff9f017f1fa0b199;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1197817fd839ea3e;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000033;
+  *((unsigned long*)& __m128i_result[1]) = 0xff011fb11181d8ea;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x80808080806b000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000080808000;
+  __m128i_out = __lsx_vpickod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vpickod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0x5);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffffffff;
+  __m128i_out = __lsx_vpickev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffc0ff80ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000005;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x80008000ec82ab51;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8000800089e08000;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fffefffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vpickod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000001000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0x8);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x1);
+  *((unsigned long*)& __m128d_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffff00;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000103030102ffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff00ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000010102ffff;
+  __m128i_out = __lsx_vpickev_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x5);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x67eb85afb2ebb000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc8847ef6ed3f2000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000003ddc5dac;
+  *((unsigned long*)& __m128i_result[1]) = 0x67ebb2ebc884ed3f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000003ddc;
+  __m128i_out = __lsx_vpickod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000003ddc5dac;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6453f5e01d6e5000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000fdec000000000;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x2);
+  *((unsigned long*)& __m128i_op0[1]) = 0x801dd5cb0004e058;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00009c7c00007176;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0xe);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000008;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8d78336c83652b86;
+  *((unsigned long*)& __m128i_op0[0]) = 0x39c51f389c0d6112;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8d78336c83652b86;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff0000857a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x05fafe0101fe000e;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x4);
+  *((unsigned long*)& __m128i_op0[1]) = 0xe2560afe9c001a18;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000600007fff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x30eb022002101b20;
+  *((unsigned long*)& __m128i_op0[0]) = 0x020310edc003023d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x020310edc003023d;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x021b7d2449678a35;
+  *((unsigned long*)& __m128i_op0[0]) = 0x030298a621030a49;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x4);
+  *((unsigned long*)& __m128d_op0[1]) = 0x00000000abba7980;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001effae001effae;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_result[1]) = 0x004d004d004d004d;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0674c8868a74fc80;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfdce8003090b0906;
+  int_out = __lsx_vpickve2gr_w(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000008686;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x01533b5e7489ae24;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffab7e71e33848;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x3b5eae24ab7e3848;
+  __m128i_out = __lsx_vpickev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_h(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000c6c60000c6c6;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000feff23560000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000fd1654860000;
+  unsigned_int_out = __lsx_vpickve2gr_bu(__m128i_op0,0xc);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000056000056;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000003e2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000000003e2;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vpickod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f4f00004f4f0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4f4f00004f4f0000;
+  unsigned_int_out = __lsx_vpickve2gr_wu(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x4f4f4f4f4f4f4f4f;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000009c83e21a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000022001818;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000e21a00001818;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f4f4f4f4f4f4f4f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4f4f4f4f4f4f4f4f;
+  __m128i_out = __lsx_vpickev_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000120000000d;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000e0000000e;
+  unsigned_long_int_out = __lsx_vpickve2gr_du(__m128i_op0,0x0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffffffffffff;
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000ebd20000714f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00012c8a0000a58a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000010000;
+  __m128i_out = __lsx_vpickod_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vpickod_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
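+  /* Editor note (comment only, not from the original patch): the remaining
+     __lsx_vilvl_{b,h,w,d} cases interleave elements taken from the low
+     64 bits of the two source vectors.  */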
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000b0000000b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000201000000000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000201000000000b;
+  __m128i_out = __lsx_vilvl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffefffefffffffc;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffefffffffeff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffcff;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7404443064403aec;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000d6eefefc0498;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff7f800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x2d1da85b7f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x002d001dd6a8ee5b;
+  *((unsigned long*)& __m128i_result[0]) = 0xfe7ffc8004009800;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001000000010;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000c0000bd49;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000c7fff000c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1000100010001000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000c7fff000c;
+  *((unsigned long*)& __m128i_result[0]) = 0x1000100010001000;
+  __m128i_out = __lsx_vilvl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00ff00ff0000007f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vilvl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001e8e1d8;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000e400000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000001e8e1d8;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000e400000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000e4e4;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000101;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0008000000000000;
+  __m128i_out = __lsx_vilvl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vilvl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lsx_vpickve2gr_d(__m128i_op0,0x1);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000ffffffe0;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffe0;
+  __m128i_out = __lsx_vilvl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbafebb00ffd500fe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvl_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x80808080806b000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff80005613;
+  *((unsigned long*)& __m128i_op1[0]) = 0x007f800000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000807f80808000;
+  *((unsigned long*)& __m128i_result[0]) = 0x80006b0000000b00;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000080808000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x80808080806b000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0080008000800080;
+  *((unsigned long*)& __m128i_result[0]) = 0x0080006b0000000b;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xc0808000c0808000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xc080800000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xc080800000000000;
+  __m128i_out = __lsx_vilvl_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff010300ff0103;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x007ffff001000300;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff0001000300;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7ffffffe00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7ffffffe00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x007f00ff00ff00fe;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_vpickve2gr_b(__m128i_op0,0x8);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvl_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0014001400140000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001400000014;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001400000000;
+  __m128i_out = __lsx_vilvl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00009c7c00007176;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000009c007c00;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000071007600;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x9c9c9c9c00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvl_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000060002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000060002;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe4c8b96e2560afe9;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc001a1867fffa207;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000c0010000a186;
+  *((unsigned long*)& __m128i_result[0]) = 0x00067fff0002a207;
+  __m128i_out = __lsx_vilvl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000014414104505;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1011050040004101;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000014414104505;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1011050040004101;
+  *((unsigned long*)& __m128i_result[1]) = 0x1010111105050000;
+  *((unsigned long*)& __m128i_result[0]) = 0x4040000041410101;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvl_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffac5cffffac5c;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffac5cffffac5c;
+  *((unsigned long*)& __m128i_op1[1]) = 0x010169d9010169d9;
+  *((unsigned long*)& __m128i_op1[0]) = 0x01010287010146a1;
+  *((unsigned long*)& __m128i_result[1]) = 0xff01ff01ac025c87;
+  *((unsigned long*)& __m128i_result[0]) = 0xff01ff01ac465ca1;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff01ff01ac025c87;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff01ff01ac465ca1;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff01ff0100000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xac465ca100000000;
+  __m128i_out = __lsx_vilvl_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000eefff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf8e1a03affffe3e2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000246d9755;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000002427c2ee;
+  *((unsigned long*)& __m128i_result[1]) = 0xf8e10000a03a0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff2427e3e2c2ee;
+  __m128i_out = __lsx_vilvl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffe4ffe4ffe4ffe4;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffe4ffe4ffe4ffe4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000011ff040;
+  *((unsigned long*)& __m128i_result[1]) = 0xff00e400ff00e400;
+  *((unsigned long*)& __m128i_result[0]) = 0xff01e41ffff0e440;
+  __m128i_out = __lsx_vilvl_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff00e400ff00e400;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff01e41ffff0ffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff01ffffe41f0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xfff00000ffff0000;
+  __m128i_out = __lsx_vilvl_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
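+  /* Interleave-high tests: __lsx_vilvh_{b,h,w,d} interleave the elements
+     of the high half of each operand, with even-numbered result elements
+     taken from the second operand and odd-numbered ones from the first
+     (as the expected vectors below encode).  */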
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vilvh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x007fffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x007fffff00000000;
+  __m128i_out = __lsx_vilvh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x195f307a5d04acbb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6a1a3fbb3c90260e;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x195f307a5d04acbb;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8644000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xaed495f03343a685;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffbe6ed563;
+  *((unsigned long*)& __m128i_result[1]) = 0x8644ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000fffe;
+  __m128i_out = __lsx_vilvh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff0000ffff0000;
+  __m128i_out = __lsx_vilvh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001300000013;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000e13;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000e13;
+  __m128i_out = __lsx_vilvh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vilvh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000a000a00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x000a000a00000000;
+  __m128i_out = __lsx_vilvh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xff00ff00ff00ff00;
+  __m128i_out = __lsx_vilvh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000004f804f80;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000004f804f80;
+  __m128i_out = __lsx_vilvh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x004f0080004f0080;
+  *((unsigned long*)& __m128i_result[0]) = 0x004f0080004f0080;
+  __m128i_out = __lsx_vilvh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ffa7f8ff81;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000003f0080ffc0;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000007fff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000a7f87fffff81;
+  __m128i_out = __lsx_vilvh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00003f8000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00003f8000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x8000ffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x8000ffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000080003f80ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m128i_op0[0]) = 0x202020202020ff20;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x2000200020002000;
+  *((unsigned long*)& __m128i_result[0]) = 0x2000200020002000;
+  __m128i_out = __lsx_vilvh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0808ffff0808ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0808ffff0808ffff;
+  __m128i_out = __lsx_vilvh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vilvh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ff00ff00ff00ff;
+  __m128i_out = __lsx_vilvh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000157;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010058;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010058;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff0000ffff;
+  __m128i_out = __lsx_vilvh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000200;
+  __m128i_out = __lsx_vilvh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0002008360500088;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000008;
+  __m128i_out = __lsx_vilvh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_h(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000f3040705;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000f3040705;
+  __m128i_out = __lsx_vilvh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_w(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vilvh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xff00ff00ff00ff00;
+  __m128i_out = __lsx_vilvh_b(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vilvh_d(__m128i_op0,__m128i_op1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
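+  /* __lsx_vshuf_b (a, b, c): each result byte is picked from the byte
+     concatenation of b (indices 0-15) and a (indices 16-31) by the
+     corresponding control byte of c; control bytes with bit 6 or 7 set
+     select a zero byte.  */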
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000007f00000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000401000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100000004;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00000000007f0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0404040404040404;
+  *((unsigned long*)& __m128i_result[0]) = 0x0404040404000404;
+  __m128i_out = __lsx_vshuf_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000002f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000029;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x000000000000002f;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000029;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffff00;
+  __m128i_out = __lsx_vshuf_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7efefefe82010201;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x418181017dfefdff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffff81;
+  __m128i_out = __lsx_vshuf_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x52525252adadadad;
+  *((unsigned long*)& __m128i_op1[0]) = 0x52525252adadadad;
+  *((unsigned long*)& __m128i_op2[1]) = 0x800000007fffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x800000007fffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x00adadad00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00adadad00000000;
+  __m128i_out = __lsx_vshuf_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbfd10d0d7b6b6b73;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc5c534920000c4ed;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xedededededededed;
+  *((unsigned long*)& __m128i_result[0]) = 0xedededededededed;
+  __m128i_out = __lsx_vshuf_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m128i_op1[1]) = 0x04040403fafafafc;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000000ff80;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080808080808080;
+  __m128i_out = __lsx_vshuf_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000001a0000000b;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000080000000ff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xff6cffb5ff98ff6e;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffd7ff8dffa4ff7a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x34947b4b11684f92;
+  *((unsigned long*)& __m128i_op1[0]) = 0xee297a731e5c5f86;
+  *((unsigned long*)& __m128i_op2[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffc0000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000868686868686;
+  __m128i_out = __lsx_vshuf_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
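+  /* __lsx_vshuf_h: here the first argument is the control vector; each
+     result halfword is taken from the concatenation of the third operand
+     (indices 0-7) and the second operand (indices 8-15), and control
+     elements with their high bits set yield zero.  */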
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m128i_result[1]) = 0x000d000d000d000d;
+  *((unsigned long*)& __m128i_result[0]) = 0x000d000d000d000d;
+  __m128i_out = __lsx_vshuf_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000002bfd9461;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000300037ff000ff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0003000300a10003;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000300037ff000ff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0003000300a10003;
+  *((unsigned long*)& __m128i_op2[1]) = 0x000000007ff000ff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0909000009090000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0909000009090000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0909000009090000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0909000009090000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x002a05a2f059094a;
+  *((unsigned long*)& __m128i_op2[0]) = 0x05ad3ba576eae048;
+  *((unsigned long*)& __m128i_result[1]) = 0x0909e0480909e048;
+  *((unsigned long*)& __m128i_result[0]) = 0x0909e0480909e048;
+  __m128i_out = __lsx_vshuf_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
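+  /* __lsx_vshuf_w: the same selection scheme on word elements (indices
+     0-3 from the third operand, 4-7 from the second).  */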
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000000c0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000001ffffff29;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x00000000000000c0;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00000001ffffff29;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffff2900000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000100000001;
+  __m128i_out = __lsx_vshuf_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1f54e0ab00000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op2[0]) = 0x010101fe0101fe87;
+  *((unsigned long*)& __m128i_result[1]) = 0x0101fe870101fe87;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101fe8700000000;
+  __m128i_out = __lsx_vshuf_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000007fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000020000020;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000020000020;
+  *((unsigned long*)& __m128i_result[1]) = 0x2000002000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x2000002020000020;
+  __m128i_out = __lsx_vshuf_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000004870ba0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_op2[1]) = 0x8000000100000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x8000000000000103;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000010300000103;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000010300000000;
+  __m128i_out = __lsx_vshuf_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000ff0000857a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x05fafe0101fe000e;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff00000000;
+  __m128i_out = __lsx_vshuf_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xada4808924882588;
+  *((unsigned long*)& __m128i_op0[0]) = 0xacad25090caca5a4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x021b7d24c9678a35;
+  *((unsigned long*)& __m128i_op1[0]) = 0x030298a6a1030a49;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_w(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
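+  /* __lsx_vshuf_d: the same selection scheme on doubleword elements
+     (indices 0-1 from the third operand, 2-3 from the second).  */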
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xdfa6e0c6d46cdc13;
+  *((unsigned long*)& __m128i_op0[0]) = 0x21fc7081ec69b5f2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000002c002400;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffb96bffff57c9;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffff6080ffff4417;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffff0015172b;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffff0015172b;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff0015172b;
+  __m128i_out = __lsx_vshuf_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000002000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xf0003000f0003000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x021b7d2449678a35;
+  *((unsigned long*)& __m128i_op0[0]) = 0x030298a621030a49;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op2[1]) = 0x021b7d24c9678a35;
+  *((unsigned long*)& __m128i_op2[0]) = 0x030298a6a1030a49;
+  *((unsigned long*)& __m128i_result[1]) = 0x021b7d24c9678a35;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f7f00007f7f0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f7f80807f7f8080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000fffe0000fffe;
+  *((unsigned long*)& __m128i_op2[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffff10000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf_d(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
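+  /* __lsx_vshuf4i_{b,h,w} (a, imm): within each aligned group of four
+     elements, result element j is the group element selected by bits
+     [2j+1:2j] of the 8-bit immediate.  */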
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000030000;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0xc9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0004007c00fc0000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x047c0404fc00fcfc;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0x8a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x007fffff00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xff00ff7f00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0x32);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0x85);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffff51cf8da;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffd6040188;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffff8f8dada;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff01018888;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0x50);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x007d00c50177ac5b;
+  *((unsigned long*)& __m128i_op0[0]) = 0xac82aa88a972a36a;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000c5ac01015b;
+  *((unsigned long*)& __m128i_result[0]) = 0xaaacac88a3a9a96a;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0x7c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000a0000000a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000a00000009;
+  *((unsigned long*)& __m128i_result[1]) = 0x0a0a0a000a0a0a00;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a0a0a0009090900;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001000100;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000001000100;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00003f8000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00003f8000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x003f800000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x003f800000000000;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0xd2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0x6c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0x81);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000dffff000d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000ffffff;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0x6b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x5f675e96e29a5a60;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x965f5e9660e25a60;
+  *((unsigned long*)& __m128i_result[0]) = 0xff7f7fffff7f7fff;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0x34);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x131211101211100f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x11100f0e100f0e0d;
+  *((unsigned long*)& __m128i_result[1]) = 0x13101213120f1112;
+  *((unsigned long*)& __m128i_result[0]) = 0x110e1011100d0f10;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0xcb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000001000110;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000431f851f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000001011010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000043431f1f;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0xf0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xc0b4d1a5f8babad3;
+  *((unsigned long*)& __m128i_op0[0]) = 0xbbc8ecc5f3ced5f3;
+  *((unsigned long*)& __m128i_result[1]) = 0xd1c0c0a5baf8f8d3;
+  *((unsigned long*)& __m128i_result[0]) = 0xecbbbbc5d5f3f3f3;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0x7c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000454ffff9573;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000454ffff9573;
+  __m128i_out = __lsx_vshuf4i_b(__m128i_op0,0xa4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_h(__m128i_op0,0xf3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vshuf4i_h(__m128i_op0,0x2c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_h(__m128i_op0,0xd2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x003f000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x007c000d00400000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000003f00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000007c00000040;
+  __m128i_out = __lsx_vshuf4i_h(__m128i_op0,0x31);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_h(__m128i_op0,0xb9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x7ffffffe00000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ffffffe00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x7fff00007fff0000;
+  __m128i_out = __lsx_vshuf4i_h(__m128i_op0,0xcd);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff00000000ffff;
+  __m128i_out = __lsx_vshuf4i_h(__m128i_op0,0x93);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000007f7f7f;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x007f007f00007f7f;
+  __m128i_out = __lsx_vshuf4i_h(__m128i_op0,0x58);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vshuf4i_w(__m128i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_w(__m128i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_w(__m128i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000080808000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000080808000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_w(__m128i_op0,0x8b);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffdfffdfffdfffd;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffdfffdfffdfffd;
+  *((unsigned long*)& __m128i_result[1]) = 0xfffdfffdfffdfffd;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffdfffdfffdfffd;
+  __m128i_out = __lsx_vshuf4i_w(__m128i_op0,0x7e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfefefefdbffefdfe;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfefefeeffef7fefe;
+  *((unsigned long*)& __m128i_result[1]) = 0xfef7fefebffefdfe;
+  *((unsigned long*)& __m128i_result[0]) = 0xfefefefdfefefeef;
+  __m128i_out = __lsx_vshuf4i_w(__m128i_op0,0x2d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x002a001a001a000b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000002a001a;
+  *((unsigned long*)& __m128i_result[0]) = 0x001a000b00000000;
+  __m128i_out = __lsx_vshuf4i_w(__m128i_op0,0x78);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_w(__m128i_op0,0x98);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000010f8000081a2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000069bb00000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000001000010f8;
+  __m128i_out = __lsx_vshuf4i_w(__m128i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
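+  /* __lsx_vshuf4i_d (a, b, imm): the two result doublewords are chosen
+     from {a[0], a[1], b[0], b[1]} by immediate bits [1:0] and [3:2]
+     respectively.  */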
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_d(__m128i_op0,__m128i_op1,0x44);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fffff800;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fffff800;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000fffff800;
+  __m128i_out = __lsx_vshuf4i_d(__m128i_op0,__m128i_op1,0x8a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000001f0a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000006f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_d(__m128i_op0,__m128i_op1,0x36);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffda6e;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffffe3d6;
+  *((unsigned long*)& __m128i_op1[1]) = 0xeeb1e4f4bc3763f3;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6f5edf5ada6fe3d7;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffffe3d6;
+  *((unsigned long*)& __m128i_result[0]) = 0xeeb1e4f4bc3763f3;
+  __m128i_out = __lsx_vshuf4i_d(__m128i_op0,__m128i_op1,0x23);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100200001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000100200001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_d(__m128i_op0,__m128i_op1,0x3a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xce23d33e43d9736c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x63b2ac27aa076aeb;
+  *((unsigned long*)& __m128i_result[1]) = 0x63b2ac27aa076aeb;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_d(__m128i_op0,__m128i_op1,0xc8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000158;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_d(__m128i_op0,__m128i_op1,0xc9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_d(__m128i_op0,__m128i_op1,0xbf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x801d5de0000559e0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x77eb86788eebaf00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_d(__m128i_op0,__m128i_op1,0x2e);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x89582bf870006860;
+  *((unsigned long*)& __m128i_op1[0]) = 0x89582bf870006860;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vshuf4i_d(__m128i_op0,__m128i_op1,0x94);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
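+  /* The tests below switch to vextrins.{b,h,w,d}: the element of op1 selected
+     by the low field of the immediate is copied into op0 at the position given
+     by the high field, leaving the other elements of op0 unchanged.  */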
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbf8000000000ffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xcf00000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffff00000000;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0x92);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_b(__m128i_op0,__m128i_op1,0xc2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_h(__m128i_op0,__m128i_op1,0x3d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0200020002000200;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0200020002000200;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffff02000200;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0c03e17edd781b11;
+  *((unsigned long*)& __m128i_op0[0]) = 0x342caf9be55700b5;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000040400000383;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffe000ffff1fff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0c03e17edd781b11;
+  *((unsigned long*)& __m128i_result[0]) = 0x342caf9bffff1fff;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0xcc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_b(__m128i_op0,__m128i_op1,0xc6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000a16316b0;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000063636363;
+  *((unsigned long*)& __m128i_op1[1]) = 0x16161616a16316b0;
+  *((unsigned long*)& __m128i_op1[0]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000a16316b0;
+  *((unsigned long*)& __m128i_result[0]) = 0x16161616a16316b0;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0xa7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfff489b693120950;
+  *((unsigned long*)& __m128i_op1[0]) = 0xfffc45a851c40c18;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffc45a851c40c18;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0x48);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0xcc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000005d5d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0x41);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffefefe6a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000c2bac2c2;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000fefefe6a;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000c2bac2c2;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0x7c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7ffffffeffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4080808080808080;
+  *((unsigned long*)& __m128i_result[1]) = 0xff80ffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x7ffffffeffffffff;
+  __m128i_out = __lsx_vextrins_b(__m128i_op0,__m128i_op1,0xe6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000a0000000a;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000a00000009;
+  *((unsigned long*)& __m128i_result[1]) = 0x000a000a0000000a;
+  *((unsigned long*)& __m128i_result[0]) = 0x000a000a000a000a;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0xaf);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffff80000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0x67);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x004fcfcfd01f9f9f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x9f4fcfcfcf800000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x004fcfcfd01f9f9f;
+  *((unsigned long*)& __m128i_op1[0]) = 0x9f4fcfcfcf800000;
+  *((unsigned long*)& __m128i_result[1]) = 0x004f1fcfd01f9f9f;
+  *((unsigned long*)& __m128i_result[0]) = 0x9f4fcfcfcf800000;
+  __m128i_out = __lsx_vextrins_b(__m128i_op0,__m128i_op1,0xda);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x75b043c4d17db125;
+  *((unsigned long*)& __m128i_op0[0]) = 0xeef8227b596117b1;
+  *((unsigned long*)& __m128i_op1[1]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_op1[0]) = 0x4f804f804f804f80;
+  *((unsigned long*)& __m128i_result[1]) = 0x75b043c4d17db125;
+  *((unsigned long*)& __m128i_result[0]) = 0xeef8227b4f8017b1;
+  __m128i_out = __lsx_vextrins_h(__m128i_op0,__m128i_op1,0x15);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x027c027c000027c0;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000de32400;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x027c027c000027c0;
+  __m128i_out = __lsx_vextrins_h(__m128i_op0,__m128i_op1,0x77);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363797c63996399;
+  *((unsigned long*)& __m128i_op0[0]) = 0x171f0a1f6376441f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x6363797c63990099;
+  *((unsigned long*)& __m128i_result[0]) = 0x171f0a1f6376441f;
+  __m128i_out = __lsx_vextrins_b(__m128i_op0,__m128i_op1,0x94);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0bd80bd80bdfffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0bd80bd80bd80000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0bd80bd80bd80000;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0xf9);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x41dfbe1f41e0ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffc2ffe000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000ffc100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x41dfbe1f41e0ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ffc100010001;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0xec);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xe93d0bd19ff0c170;
+  *((unsigned long*)& __m128i_op1[0]) = 0x5237c1bac9eadf55;
+  *((unsigned long*)& __m128i_result[1]) = 0x5237c1baffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0x7d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffbd994889;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000000000a092444;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000890000000000;
+  __m128i_out = __lsx_vextrins_b(__m128i_op0,__m128i_op1,0x58);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000fea0000fffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffff8607db959f;
+  *((unsigned long*)& __m128i_op1[0]) = 0xff0cff78ff96ff14;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000fea0000fffe;
+  *((unsigned long*)& __m128i_result[0]) = 0xff0cff78ff96ff14;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0xc2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x01ef013f01e701f8;
+  *((unsigned long*)& __m128i_op1[0]) = 0x35bb8d32b2625c00;
+  *((unsigned long*)& __m128i_result[1]) = 0x00008d3200000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_h(__m128i_op0,__m128i_op1,0xea);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8003000000020000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x4040ffffc0400004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8003000000020000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0x64);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffffffff;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0x1f);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0x74);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0001ffff9515;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ffff53d9;
+  *((unsigned long*)& __m128i_result[0]) = 0xff000001ffff9515;
+  __m128i_out = __lsx_vextrins_b(__m128i_op0,__m128i_op1,0x67);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x80808080806b000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_b(__m128i_op0,__m128i_op1,0xf4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_b(__m128i_op0,__m128i_op1,0xc1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0x71);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0x82);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_b(__m128i_op0,__m128i_op1,0xd5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0xf3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xbbe5560400010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe7e5dabf00010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0xbbe5560400010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0xe7e5dabf00010001;
+  *((unsigned long*)& __m128i_result[1]) = 0xe7e5560400010001;
+  *((unsigned long*)& __m128i_result[0]) = 0xe7e5dabf00010001;
+  __m128i_out = __lsx_vextrins_h(__m128i_op0,__m128i_op1,0xf3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0x2c);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_h(__m128i_op0,__m128i_op1,0x27);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0x5d);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0x24);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m128i_result[1]) = 0x0001000101010001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0001000100010001;
+  __m128i_out = __lsx_vextrins_b(__m128i_op0,__m128i_op1,0xb6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x975ca6046e2e4889;
+  *((unsigned long*)& __m128i_op1[0]) = 0x1748c4f9ed1a5870;
+  *((unsigned long*)& __m128i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x1748c4f9ed1a5870;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0x6a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xfffffffffc606ec5;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000014155445;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_h(__m128i_op0,__m128i_op1,0x76);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000024170000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000aa822a79308f6;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000024170000;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0x32);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000084d12ce;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000024170000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_h(__m128i_op0,__m128i_op1,0x56);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffffffff;
+  __m128i_out = __lsx_vextrins_b(__m128i_op0,__m128i_op1,0xc5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000034;
+  *((unsigned long*)& __m128i_op1[1]) = 0x01017f3c00000148;
+  *((unsigned long*)& __m128i_op1[0]) = 0x117d7f7b093d187f;
+  *((unsigned long*)& __m128i_result[1]) = 0x117d7f7b093d187f;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000034;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0x70);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x01533b5e7489ae24;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe519ab7e71e33848;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x01533b5e7489ae24;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffab7e71e33848;
+  __m128i_out = __lsx_vextrins_h(__m128i_op0,__m128i_op1,0xbc);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffff760386bdae46;
+  *((unsigned long*)& __m128i_op1[0]) = 0xc1fc7941bc7e00ff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000ffff7603;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0xc3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff2356fe165486;
+  *((unsigned long*)& __m128i_op1[1]) = 0x3a8000003a800000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x000ef0000000003b;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000003b0000ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff2356fe165486;
+  __m128i_out = __lsx_vextrins_w(__m128i_op0,__m128i_op1,0x70);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vextrins_d(__m128i_op0,__m128i_op1,0x8a);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
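+  /* __lsx_vldi takes only an immediate; its 13-bit value encodes the element
+     width and the replicated constant, so no input operands are set up.  */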
+  *((unsigned long*)& __m128i_result[1]) = 0x00a300a300a300a3;
+  *((unsigned long*)& __m128i_result[0]) = 0x00a300a300a300a3;
+  __m128i_out = __lsx_vldi(1187);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0xfffffffffffffe15;
+  *((unsigned long*)& __m128i_result[0]) = 0xfffffffffffffe15;
+  __m128i_out = __lsx_vldi(3605);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0xecececececececec;
+  *((unsigned long*)& __m128i_result[0]) = 0xecececececececec;
+  __m128i_out = __lsx_vldi(1004);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffff00ff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffff00ff00ff00;
+  __m128i_out = __lsx_vldi(-1686);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001effae001effae;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001effae001effae;
+  unsigned_int_out = __lsx_vpickve2gr_hu(__m128i_op0,0x3);
+  *((unsigned long*)& __m128i_result[1]) = 0x004d004d004d004d;
+  *((unsigned long*)& __m128i_result[0]) = 0x004d004d004d004d;
+  __m128i_out = __lsx_vldi(1101);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a0000000a000000;
+  __m128i_out = __lsx_vldi(-3318);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x00ffff00ff00ff00;
+  *((unsigned long*)& __m128i_result[0]) = 0x00ffff00ff00ff00;
+  __m128i_out = __lsx_vldi(-1686);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_result[1]) = 0x0a0000000a000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a0000000a000000;
+  __m128i_out = __lsx_vldi(-3318);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
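+  /* The __lsx_bz_* and __lsx_bnz_* intrinsics below return an int; the results
+     are not compared against expected values, so these calls mainly exercise
+     code generation for the vector branch (vset*) patterns.  */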
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff00000000;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000ffff0000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0404040404040404;
+  *((unsigned long*)& __m128i_op0[0]) = 0xec68e3ef5a98ed54;
+  int_out = __lsx_bz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xf4b6f3f52f4ef4a8;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x195f307a5d04acbb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000001fc0000;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0400040004000400;
+  int_out = __lsx_bz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffff01ff01;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xfffffffffffffe03;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffe03;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000006;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00005555aaabfffe;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fff7fff;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x7f0101070101010f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000127f010116;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000002bfd9461;
+  int_out = __lsx_bz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xefffdffff0009d3d;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000010000c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x006ffffefff0000d;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000006f00001f0a;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000958affff995d;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x000100010001007c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001000100010001;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000200000002;
+  int_out = __lsx_bnz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000000ffc2f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00201df000000000;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ca0200000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ca0200000000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xfff082f000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x003f000000000000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0202fe02fd020102;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0a0aa9890a0ac5f3;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x6363636363636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0x6368d2cd63636363;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808081;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0038d800ff000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00fffe00fffffe00;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x8000008000008080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080800000800080;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000002e8b164;
+  *((unsigned long*)& __m128i_op0[0]) = 0x199714a038478040;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffe00029f9f6061;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x007f008000ea007f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00ff00ff00ff00ff;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x687a8373f249bc44;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7861145d9241a14a;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff0000ffff0000;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000007fff0018;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000001;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ffff0000ffff;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0010001000100010;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffff7ffffffffe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffffffe;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000004870ba0;
+  int_out = __lsx_bz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_d(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001fffe0001fffe;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xcd636363cd636363;
+  *((unsigned long*)& __m128i_op0[0]) = 0xcd636363cd636363;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000202020200;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000100;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xe593c8c4e593c8c4;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffff9727ffff9727;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffe79ffffba5f;
+  int_out = __lsx_bnz_w(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000021;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000080801030000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000080103040000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000011ffee;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000000dfff2;
+  int_out = __lsx_bnz_b(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xf784000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffff784;
+  int_out = __lsx_bz_v(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xf784000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffff784;
+  int_out = __lsx_bz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffffffff009ff83f;
+  int_out = __lsx_bnz_h(__m128i_op0);
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  int_out = __lsx_bz_v(__m128i_op0);
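+
+  /* Memory-access intrinsics: vld/vldx load a full 128-bit vector, while
+     vldrepl.{b,h,w,d} loads one element at the given offset and replicates it
+     across the whole vector, as the expected values show.  */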
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_result[0]) = 0x3ab7a3fc47a5c31a;
+  __m128i_out = __lsx_vld((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_result[0]) = 0x3ab7a3fc47a5c31a;
+  __m128i_out = __lsx_vldx((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0xc3c3c3c3c3c3c3c3;
+  *((unsigned long*)& __m128i_result[0]) = 0xc3c3c3c3c3c3c3c3;
+  __m128i_out = __lsx_vldrepl_b((unsigned long *)&__m128i_op0, 0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0xc31ac31ac31ac31a;
+  *((unsigned long*)& __m128i_result[0]) = 0xc31ac31ac31ac31a;
+  __m128i_out = __lsx_vldrepl_h((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x47a5c31a47a5c31a;
+  *((unsigned long*)& __m128i_result[0]) = 0x47a5c31a47a5c31a;
+  __m128i_out = __lsx_vldrepl_w((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[0]) = 0x3ab7a3fc47a5c31a;
+  __m128i_out = __lsx_vldrepl_d((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_result[0]) = 0x3ab7a3fc47a5c31a;
+  __m128i_out = __lsx_vldx((unsigned long *)&__m128i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
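+  /* Store intrinsics: vst/vstx write the whole vector, while vstelm.{b,h,w,d}
+     stores only the element selected by the trailing index, so the expected
+     buffers for vstelm hold a single element each.  */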
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x0;
+  __lsx_vst(__m128i_op0, (unsigned long *)&__m128i_result, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_op0, __m128i_result);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x0;
+  __lsx_vstx(__m128i_op0, (unsigned long *)&__m128i_result, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_op0, __m128i_result);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x05;
+  *((unsigned long*)& __m128i_out[1]) = 0x0;
+  *((unsigned long*)& __m128i_out[0]) = 0x0;
+  __lsx_vstelm_b(__m128i_op0, (unsigned long *)&__m128i_out, 0x0, 0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x5c05;
+  *((unsigned long*)& __m128i_out[1]) = 0x0;
+  *((unsigned long*)& __m128i_out[0]) = 0x0;
+  __lsx_vstelm_h(__m128i_op0, (unsigned long *)&__m128i_out, 0x0, 0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0xc9d85c05;
+  *((unsigned long*)& __m128i_out[1]) = 0x0;
+  *((unsigned long*)& __m128i_out[0]) = 0x0;
+  __lsx_vstelm_w(__m128i_op0, (unsigned long *)&__m128i_out, 0x0, 0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_out[1]) = 0x0;
+  *((unsigned long*)& __m128i_out[0]) = 0x0;
+  __lsx_vstelm_d(__m128i_op0, (unsigned long *)&__m128i_out, 0x0, 0x1);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1dcc4255c9d85c05;
+  *((unsigned long*)& __m128i_op0[0]) = 0x3ab7a3fc47a5c31a;
+  *((unsigned long*)& __m128i_result[1]) = 0x0;
+  *((unsigned long*)& __m128i_result[0]) = 0x0;
+  __lsx_vstx(__m128i_op0, (unsigned long *)&__m128i_result, 0x0);
+  ASSERTEQ_64(__LINE__, __m128i_op0, __m128i_result);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-str-manipulate.c b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-str-manipulate.c
new file mode 100644
index 00000000000..94f06dcaf47
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lsx/lsx-str-manipulate.c
@@ -0,0 +1,408 @@
+/* { dg-do run } */
+/* { dg-options "-mlsx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lsxintrin.h>
+int main ()
+{
+  __m128i __m128i_op0, __m128i_op1, __m128i_op2, __m128i_out, __m128i_result;
+  __m128 __m128_op0, __m128_op1, __m128_op2, __m128_out, __m128_result;
+  __m128d __m128d_op0, __m128d_op1, __m128d_op2, __m128d_out, __m128d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+  *((int*)& __m128_op0[3]) = 0x0000c77c;
+
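+  /* The vfrstp.{b,h} tests write the index of the first negative element of
+     op1 (or the element count when none is negative) into the element of op0
+     selected by op2; the expected vectors differ from op0 in one element.  */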
+  *((unsigned long*)& __m128i_op0[1]) = 0xfe07e5fefefdddfe;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00020100fedd0c00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0005000501800005;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0xfe07e5fefefdddfe;
+  *((unsigned long*)& __m128i_result[0]) = 0x00020100fedd0008;
+  __m128i_out = __lsx_vfrstp_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0404038383838404;
+  *((unsigned long*)& __m128i_op2[1]) = 0x03ff03ff03ff03ff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000001;
+  __m128i_out = __lsx_vfrstp_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000200010;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000010;
+  __m128i_out = __lsx_vfrstp_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000010;
+  __m128i_out = __lsx_vfrstp_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0e7ffffc01fffffc;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000003f803f4;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0e7ffffc01fffffc;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000003f803f4;
+  *((unsigned long*)& __m128i_result[1]) = 0x0e7ffffc01fffffc;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000001003f803f4;
+  __m128i_out = __lsx_vfrstp_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000010;
+  __m128i_out = __lsx_vfrstp_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000020000007d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000746400016388;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000586100015567;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x0800000200000002;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000020000007d;
+  __m128i_out = __lsx_vfrstp_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m128i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[0]) = 0xffffffffffff0008;
+  __m128i_out = __lsx_vfrstp_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x61608654a2d4f6da;
+  *((unsigned long*)& __m128i_result[1]) = 0x00000000ff08ffff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vfrstp_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x36fbdfdcffdcffdc;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000008140c80;
+  *((unsigned long*)& __m128i_op2[1]) = 0x1f1f1f1f1f1f1f00;
+  *((unsigned long*)& __m128i_op2[0]) = 0x1f1f1f27332b9f00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x36fbdfdcffdc0008;
+  __m128i_out = __lsx_vfrstp_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000aaaa;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000545cab1d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000081a83bea;
+  *((unsigned long*)& __m128i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00d3007c014e00bd;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000aaaa;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vfrstp_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x37c0001000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x37c0001000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000003a0000003a;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x37c0001000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x37c0001000000008;
+  __m128i_out = __lsx_vfrstp_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m128i_result[0]) = 0x8080808080800008;
+  __m128i_out = __lsx_vfrstp_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x1f1f1f1f1f1f1f1f;
+  *((unsigned long*)& __m128i_op0[0]) = 0x1f1f1f1f1f1f1f1f;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x1f1f1f1f1f1f1f1f;
+  *((unsigned long*)& __m128i_op2[0]) = 0x1f1f1f1f1f1f1f1f;
+  *((unsigned long*)& __m128i_result[1]) = 0x00081f1f1f1f1f1f;
+  *((unsigned long*)& __m128i_result[0]) = 0x1f1f1f1f1f1f1f1f;
+  __m128i_out = __lsx_vfrstp_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000400080003fff;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000bc2000007e10;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000400080003fff;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000bc2000007e04;
+  __m128i_out = __lsx_vfrstp_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000a752a55;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0a753500950fa306;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffff14eb54ab;
+  *((unsigned long*)& __m128i_op1[0]) = 0x14ea6a002a406a00;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x00007fff7fff8000;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000a752a55;
+  *((unsigned long*)& __m128i_result[0]) = 0x0a753500950fa306;
+  __m128i_out = __lsx_vfrstp_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x02b010f881a281a2;
+  *((unsigned long*)& __m128i_op0[0]) = 0x27b169bbb8145f50;
+  *((unsigned long*)& __m128i_op1[1]) = 0x02b010f881a281a2;
+  *((unsigned long*)& __m128i_op1[0]) = 0x27b169bbb8145f50;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x02b010f881a281a2;
+  *((unsigned long*)& __m128i_result[0]) = 0x27b169bbb8140001;
+  __m128i_out = __lsx_vfrstp_h(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m128i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op2[0]) = 0x0000000000000155;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff100000000000;
+  __m128i_out = __lsx_vfrstp_b(__m128i_op0,__m128i_op1,__m128i_op2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0027002a00030018;
+  *((unsigned long*)& __m128i_op0[0]) = 0x7f4300177f7a7f59;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0027002a00080018;
+  *((unsigned long*)& __m128i_result[0]) = 0x7f4300177f7a7f59;
+  __m128i_out = __lsx_vfrstpi_h(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000007f00000004;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000401000001;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0001000100000004;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000110000001;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000007f00000004;
+  __m128i_out = __lsx_vfrstpi_b(__m128i_op0,__m128i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000800000000;
+  __m128i_out = __lsx_vfrstpi_h(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x75b043c4d17db125;
+  *((unsigned long*)& __m128i_op0[0]) = 0xeef8227b4f8017b1;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x027c027c000027c0;
+  *((unsigned long*)& __m128i_result[1]) = 0x75b043c4007db125;
+  *((unsigned long*)& __m128i_result[0]) = 0xeef8227b4f8017b1;
+  __m128i_out = __lsx_vfrstpi_b(__m128i_op0,__m128i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m128i_op1[1]) = 0x03c0000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x03c0038000000380;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000ff000000ff00;
+  __m128i_out = __lsx_vfrstpi_b(__m128i_op0,__m128i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000010a000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00ffff0000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00ffff000000ff00;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00000000010a000b;
+  __m128i_out = __lsx_vfrstpi_h(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m128i_op0[0]) = 0x5b35342c979955da;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m128i_result[0]) = 0x5b35342c970455da;
+  __m128i_out = __lsx_vfrstpi_b(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0010000000000000;
+  __m128i_out = __lsx_vfrstpi_b(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0101010101010101;
+  __m128i_out = __lsx_vfrstpi_h(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x00d3012b015700bb;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0001002affca0070;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000001ca02f854;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000100013fa0;
+  *((unsigned long*)& __m128i_result[1]) = 0x00d3012b015700bb;
+  *((unsigned long*)& __m128i_result[0]) = 0x00010000ffca0070;
+  __m128i_out = __lsx_vfrstpi_b(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00000000fffe0001;
+  *((unsigned long*)& __m128i_op1[1]) = 0x00000000000000bf;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000000002bb;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x00080000fffe0001;
+  __m128i_out = __lsx_vfrstpi_h(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000545cffffab1d;
+  *((unsigned long*)& __m128i_op0[0]) = 0xffff81a800003bea;
+  *((unsigned long*)& __m128i_op1[1]) = 0x13f9c5b60028a415;
+  *((unsigned long*)& __m128i_op1[0]) = 0x545cab1d81a83bea;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000545cffff0001;
+  *((unsigned long*)& __m128i_result[0]) = 0xffff81a800003bea;
+  __m128i_out = __lsx_vfrstpi_h(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0008000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vfrstpi_h(__m128i_op0,__m128i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vfrstpi_h(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000000000001b;
+  *((unsigned long*)& __m128i_result[0]) = 0x000000000000001b;
+  __m128i_out = __lsx_vfrstpi_b(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0008000000000000;
+  __m128i_out = __lsx_vfrstpi_h(__m128i_op0,__m128i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x379674c000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0xffffff7ffffffffe;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x379674c000000000;
+  __m128i_out = __lsx_vfrstpi_b(__m128i_op0,__m128i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x0000000000000000;
+  __m128i_out = __lsx_vfrstpi_h(__m128i_op0,__m128i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x001a001a001a000b;
+  *((unsigned long*)& __m128i_op0[0]) = 0x001a001a001a000b;
+  *((unsigned long*)& __m128i_op1[1]) = 0x001a001a001a000b;
+  *((unsigned long*)& __m128i_op1[0]) = 0x001a001a001a000b;
+  *((unsigned long*)& __m128i_result[1]) = 0x001a001a001a0008;
+  *((unsigned long*)& __m128i_result[0]) = 0x001a001a001a000b;
+  __m128i_out = __lsx_vfrstpi_h(__m128i_op0,__m128i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_op0[0]) = 0x02f3030303030303;
+  *((unsigned long*)& __m128i_op1[1]) = 0x004d004d004d004d;
+  *((unsigned long*)& __m128i_op1[0]) = 0x004d004d004d004d;
+  *((unsigned long*)& __m128i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m128i_result[0]) = 0x02f3030303100303;
+  __m128i_out = __lsx_vfrstpi_b(__m128i_op0,__m128i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  *((unsigned long*)& __m128i_op0[1]) = 0x000000400000004c;
+  *((unsigned long*)& __m128i_op0[0]) = 0x00007770ffff941d;
+  *((unsigned long*)& __m128i_op1[1]) = 0x000000400000004c;
+  *((unsigned long*)& __m128i_op1[0]) = 0x00007770ffff941d;
+  *((unsigned long*)& __m128i_result[1]) = 0x000000400000004c;
+  *((unsigned long*)& __m128i_result[0]) = 0x00007770ffff941d;
+  __m128i_out = __lsx_vfrstpi_h(__m128i_op0,__m128i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m128i_result, __m128i_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/simd_correctness_check.h b/gcc/testsuite/gcc.target/loongarch/vector/simd_correctness_check.h
new file mode 100644
index 00000000000..7be199ee3a0
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/simd_correctness_check.h
@@ -0,0 +1,39 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
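+/* Compare RES against REF as an array of 64-bit lanes; print a diagnostic for
+   every mismatching lane and abort at the end if any lane differed.  */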
+#define ASSERTEQ_64(line, ref, res)							\
+do{											\
+    int fail = 0;									\
+    for(size_t i = 0; i < sizeof(res)/sizeof(res[0]); ++i){				\
+	long *temp_ref = &ref[i], *temp_res = &res[i];					\
+	if(*temp_ref != *temp_res){							\
+	    printf(" error: %s at line %d, expected "#ref"[%zu]:0x%lx, got: 0x%lx\n",	\
+		   __FILE__, line, i, *temp_ref, *temp_res);				\
+	    fail = 1;									\
+	}										\
+    }											\
+    if(fail == 1) abort();								\
+}while(0)
+
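+/* Same check as ASSERTEQ_64, but the lanes are compared as 32-bit values.  */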
+#define ASSERTEQ_32(line, ref, res)							\
+do{											\
+    int fail = 0;									\
+    for(size_t i = 0; i < sizeof(res)/sizeof(res[0]); ++i){				\
+	int *temp_ref = &ref[i], *temp_res = &res[i];					\
+	if(*temp_ref != *temp_res){							\
+	    printf(" error: %s at line %d, expected "#ref"[%zu]:0x%x, got: 0x%x\n",	\
+		   __FILE__, line, i, *temp_ref, *temp_res);				\
+	    fail = 1;									\
+	}										\
+    }											\
+    if(fail == 1) abort();								\
+}while(0)
+
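+/* Compare two scalar integers and print a diagnostic on mismatch (this
+   variant reports only; it does not abort).  */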
+#define ASSERTEQ_int(line, ref, res)							\
+do{											\
+    if (ref != res){									\
+	printf(" error: %s at line %d, expected %d, got %d\n",				\
+	       __FILE__, line, ref, res);						\
+    }											\
+}while(0)
-- 
2.36.0


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 8/8] LoongArch: Add Loongson ASX directive test cases.
  2023-07-18 11:06 [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target Chenghui Pan
                   ` (6 preceding siblings ...)
  2023-07-18 11:06 ` [PATCH v2 7/8] LoongArch: Add Loongson SX directive test cases Chenghui Pan
@ 2023-07-18 11:06 ` Chenghui Pan
  2023-07-18 12:26 ` [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target Xi Ruoyao
  8 siblings, 0 replies; 11+ messages in thread
From: Chenghui Pan @ 2023-07-18 11:06 UTC (permalink / raw)
  To: gcc-patches; +Cc: xry111, i, chenglulu, xuchenghua

From: Lulu Cheng <chenglulu@loongson.cn>

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/strict-align.c: New test.
	* gcc.target/loongarch/vector/lasx/lasx-bit-manipulate.c: New test.
	* gcc.target/loongarch/vector/lasx/lasx-builtin.c: New test.
	* gcc.target/loongarch/vector/lasx/lasx-cmp.c: New test.
	* gcc.target/loongarch/vector/lasx/lasx-fp-arith.c: New test.
	* gcc.target/loongarch/vector/lasx/lasx-fp-cvt.c: New test.
	* gcc.target/loongarch/vector/lasx/lasx-int-arith.c: New test.
	* gcc.target/loongarch/vector/lasx/lasx-mem.c: New test.
	* gcc.target/loongarch/vector/lasx/lasx-perm.c: New test.
	* gcc.target/loongarch/vector/lasx/lasx-str-manipulate.c: New test.
	* gcc.target/loongarch/vector/lasx/lasx-xvldrepl.c: New test.
	* gcc.target/loongarch/vector/lasx/lasx-xvstelm.c: New test.
---
 .../gcc.target/loongarch/strict-align.c       |    13 +
 .../vector/lasx/lasx-bit-manipulate.c         | 27813 +++++++++++
 .../loongarch/vector/lasx/lasx-builtin.c      |  1509 +
 .../loongarch/vector/lasx/lasx-cmp.c          |  5361 +++
 .../loongarch/vector/lasx/lasx-fp-arith.c     |  6259 +++
 .../loongarch/vector/lasx/lasx-fp-cvt.c       |  7315 +++
 .../loongarch/vector/lasx/lasx-int-arith.c    | 38361 ++++++++++++++++
 .../loongarch/vector/lasx/lasx-mem.c          |   147 +
 .../loongarch/vector/lasx/lasx-perm.c         |  7730 ++++
 .../vector/lasx/lasx-str-manipulate.c         |   712 +
 .../loongarch/vector/lasx/lasx-xvldrepl.c     |    13 +
 .../loongarch/vector/lasx/lasx-xvstelm.c      |    12 +
 12 files changed, 95245 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/loongarch/strict-align.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-bit-manipulate.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-builtin.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-cmp.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-fp-arith.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-fp-cvt.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-int-arith.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-mem.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-perm.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-str-manipulate.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-xvldrepl.c
 create mode 100644 gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-xvstelm.c

diff --git a/gcc/testsuite/gcc.target/loongarch/strict-align.c b/gcc/testsuite/gcc.target/loongarch/strict-align.c
new file mode 100644
index 00000000000..bcad2b84f68
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/strict-align.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-Ofast -mstrict-align -mlasx" } */
+/* { dg-final { scan-assembler-not "vfadd.s" } } */
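+/* With -mstrict-align the compiler must not emit unaligned vector accesses,
+   so this sequence is expected to stay scalar and no vfadd.s should appear.  */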
+
+void
+foo (float* restrict x, float* restrict y)
+{
+  x[0] = x[0] + y[0];
+  x[1] = x[1] + y[1];
+  x[2] = x[2] + y[2];
+  x[3] = x[3] + y[3];
+}
+
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-bit-manipulate.c b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-bit-manipulate.c
new file mode 100644
index 00000000000..7a3c93a9437
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-bit-manipulate.c
@@ -0,0 +1,27813 @@
+/* { dg-do run } */
+/* { dg-options "-mlasx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lasxintrin.h>
+
+int main ()
+{
+  __m256i __m256i_op0, __m256i_op1, __m256i_op2, __m256i_out, __m256i_result;
+  __m256 __m256_op0, __m256_op1, __m256_op2, __m256_out, __m256_result;
+  __m256d __m256d_op0, __m256d_op1, __m256d_op2, __m256d_out, __m256d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+
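+  /* Each block below stores the operand and expected-result lanes through
+     unsigned long pointers, runs a single LASX builtin, and checks the output
+     with ASSERTEQ_64.  The first group exercises __lasx_xvand_v (bitwise AND).  */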
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvand_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvand_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvand_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvand_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvand_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvand_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfefee00000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfefee00000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xfefee00000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfefee00000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvand_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000004843ffdff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000004843ffdff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvand_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000001c;
+  __m256i_out = __lasx_xvand_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
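+  /* __lasx_xvor_v: bitwise OR of the two 256-bit operands.  */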
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff01fd7fff7fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00007fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fff01fd7fff7fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00007fff7fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fff01fd7fff7fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fff7fff7fff;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000005e02;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000005e02;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000005e02;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000005e02;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000089;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000089;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000001e0007ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001e0007ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000001e0007ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001e0007ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fe37fff001fffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fe37fff001fffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff7fffffff;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x003f60041f636003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x003f60041f636003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x003f60041f636003;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x003f60041f636003;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x003f60041f636003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x003f60041f636003;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff1fff1fff1fff1;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000100;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000100;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00ff00ff;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x800080ff800080ff;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff80007fff0000;
+  __m256i_out = __lasx_xvor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
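+  /* __lasx_xvxor_v: bitwise exclusive OR of the two 256-bit operands.  */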
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvxor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7be2468acf15f39c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7be2468acf15f39c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7be2468acf15f39c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7be2468acf15f39c;
+  *((unsigned long*)& __m256i_result[2]) = 0x7be2468acf15f39c;
+  *((unsigned long*)& __m256i_result[1]) = 0x7be2468acf15f39c;
+  *((unsigned long*)& __m256i_result[0]) = 0x7ff0000000000000;
+  __m256i_out = __lasx_xvxor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff3eff3eff3eff3e;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff3eff3eff3eff3e;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00c100c100c100c1;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00c100c100c100c1;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvxor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvxor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0100000001000100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0100000001000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0100000001000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0100000001000100;
+  __m256i_out = __lasx_xvxor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000f91;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000f91;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000f90;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000f90;
+  __m256i_out = __lasx_xvxor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvxor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6040190d20227a78;
+  *((unsigned long*)& __m256i_op0[1]) = 0x132feeabd2d33b38;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x9fe7fffffffff32e;
+  *((unsigned long*)& __m256i_result[2]) = 0x6040190ddfdd8587;
+  *((unsigned long*)& __m256i_result[1]) = 0xecd011542d2cc4c7;
+  *((unsigned long*)& __m256i_result[0]) = 0x6040190dffffffff;
+  __m256i_out = __lasx_xvxor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000101000001010;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000101000001010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000101000001010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000101000001010;
+  __m256i_out = __lasx_xvxor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvxor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvxor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
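+  /* __lasx_xvnor_v: bitwise NOR, i.e. ~(op0 | op1).  */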
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x40d74f979f99419f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xbf28b0686066be60;
+  __m256i_out = __lasx_xvnor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000f6ff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000f6ff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffff6ff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffff6ff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000900ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000900ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvnor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8888888808888888;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0888888888888888;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8888888808888888;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0888888888888888;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x77777777f7777777;
+  *((unsigned long*)& __m256i_result[2]) = 0xf777777777777777;
+  *((unsigned long*)& __m256i_result[1]) = 0x77777777f7777777;
+  *((unsigned long*)& __m256i_result[0]) = 0xf777777777777777;
+  __m256i_out = __lasx_xvnor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvnor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x40ff40ff40ff40ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x407b40ff40ff40f1;
+  *((unsigned long*)& __m256i_op0[1]) = 0x40ff40ff40ff40ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x407b40ff40ff40f1;
+  *((unsigned long*)& __m256i_op1[3]) = 0x40ff40ff40ff40ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x407b40ff40ff40f1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x40ff40ff40ff40ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x407b40ff40ff40f1;
+  *((unsigned long*)& __m256i_result[3]) = 0xbf00bf00bf00bf00;
+  *((unsigned long*)& __m256i_result[2]) = 0xbf84bf00bf00bf0e;
+  *((unsigned long*)& __m256i_result[1]) = 0xbf00bf00bf00bf00;
+  *((unsigned long*)& __m256i_result[0]) = 0xbf84bf00bf00bf0e;
+  __m256i_out = __lasx_xvnor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000033;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000033;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000420080000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000420080000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffbdff7fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xa000a0009f80ffcc;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffbdff7fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xa000a0009f80ffcc;
+  __m256i_out = __lasx_xvnor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op0[2]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op0[1]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op0[0]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[2]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[0]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[3]) = 0x6f6f6f6f6f6f6f6f;
+  *((unsigned long*)& __m256i_result[2]) = 0x6f6f6f6f6f6f6f6f;
+  *((unsigned long*)& __m256i_result[1]) = 0x6f6f6f6f6f6f6f6f;
+  *((unsigned long*)& __m256i_result[0]) = 0x6f6f6f6f6f6f6f6f;
+  __m256i_out = __lasx_xvnor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff000300030000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffc000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff000300030000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffc000;
+  __m256i_out = __lasx_xvnor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvnor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x800fffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x800fffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x800fffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x800fffffffffffff;
+  __m256i_out = __lasx_xvnor_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
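+  /* __lasx_xvandn_v: bitwise and-not; each result bit is (~op0) & op1.  */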
+  *((unsigned long*)& __m256i_op0[3]) = 0x1828f0e09bad7249;
+  *((unsigned long*)& __m256i_op0[2]) = 0x07ffc1b723953cec;
+  *((unsigned long*)& __m256i_op0[1]) = 0x61f2e9b333aab104;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6bf742aa0d7856a0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000019410000e69a;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf259905a09c23be0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000883a00000f20;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6d3c2d3a89167aeb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000090100008492;
+  *((unsigned long*)& __m256i_result[2]) = 0xf000104808420300;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000800000e20;
+  *((unsigned long*)& __m256i_result[0]) = 0x04082d108006284b;
+  __m256i_out = __lasx_xvandn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
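+  /* __lasx_xvpickve2gr_w copies 32-bit element 4 of op0 to a scalar (the value is not checked by this test).  */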
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ff90ff81;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ff90ff81;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000007f;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x4);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffdfffdfffdfffd;
+  __m256i_out = __lasx_xvandn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1020102010201020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1020102010201020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1020102010201020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1020102010201020;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_result[1]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xefdfefdfefdfefdf;
+  __m256i_out = __lasx_xvandn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvandn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000d6d6d;
+  __m256i_out = __lasx_xvandn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0fff0fff0fff0fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0fff0fff0fff0fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101000000010000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101000000010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
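+  /* __lasx_xvorn_v: bitwise or-not; each result bit is op0 | (~op1).  */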
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbf28b0686066be60;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x40d74f979f99419f;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffefffffefc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01480000052801a2;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffdcff64;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0002555500000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0002555500000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffdaaaaffffffff;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000022;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000022;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000236200005111;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000175e0000490d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000236200005111;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000175e0000490d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00220021004a007e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00220021004a007e;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffdfffffffdffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffddffdeffb5ff8d;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffdfffffffdffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffddffdeffb5ff8d;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00feffff00fe81;
+  *((unsigned long*)& __m256i_result[2]) = 0xfe01fe51ff00ff40;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00feffff00fe81;
+  *((unsigned long*)& __m256i_result[0]) = 0xfe01fe51ff00ff40;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe0df9f8e;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffe0df9f8e;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffe0df9f8f;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffe0df9f8f;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x800000ff800000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x800000ff800000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff7fffffff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff7fffffff7fff;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdf80df80df80dfff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffdf80dfff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x498100814843ffe1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4981008168410001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x498100814843ffe1;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4981008168410001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x40f69fe73c26f4ee;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x40f69fe73c26f4ee;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff896099cbdbfff1;
+  *((unsigned long*)& __m256i_result[2]) = 0xc987ffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xff896099cbdbfff1;
+  *((unsigned long*)& __m256i_result[0]) = 0xc987ffffffffffff;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffeffff97a1;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffdf5b000041b0;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffeffff97a1;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffdf5b000041b0;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000010000685e;
+  *((unsigned long*)& __m256i_result[2]) = 0x000020a4ffffbe4f;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000010000685e;
+  *((unsigned long*)& __m256i_result[0]) = 0x000020a4ffffbe4f;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0040000000000003;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0040000000000003;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[3]) = 0xffbffffffffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m256i_result[1]) = 0xffbffffffffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffffa;
+  __m256i_out = __lasx_xvorn_v(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
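+  /* __lasx_xvandi_b: AND each byte of op0 with an 8-bit immediate.  */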
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0xe2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000505;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x001175f10e4330e8;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff8f0842ff29211e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffff8d9ffa7103d;
+  *((unsigned long*)& __m256i_result[3]) = 0x001151510a431048;
+  *((unsigned long*)& __m256i_result[2]) = 0x5b0b08425b09011a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x5b5b58595b031019;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0x5b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_result[3]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_result[2]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_result[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_result[0]) = 0x0400040004000400;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0x2d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffff900000003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffff900000003;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x3f3f3f3900000003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x3f3f3f3900000003;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0x3f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xbabababababababa;
+  *((unsigned long*)& __m256i_result[2]) = 0xbabababababababa;
+  *((unsigned long*)& __m256i_result[1]) = 0xbabababababababa;
+  *((unsigned long*)& __m256i_result[0]) = 0xbabababababababa;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0xba);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[2]) = 0x4000404040004040;
+  *((unsigned long*)& __m256i_result[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[0]) = 0x4000404040004040;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0x40);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0x3f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffff3c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffff31;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x5e5e5e5e5e5e5e1c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x5e5e5e5e5e5e5e10;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0x5e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0x86);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f70000000000000;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0x7f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0xa3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0x98);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0xd9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvandi_b(__m256i_op0,0xcc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
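+  /* __lasx_xvori_b: OR each byte of op0 with an 8-bit immediate.  */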
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvori_b(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x6c6c6c6c6c6c6c6c;
+  *((unsigned long*)& __m256i_result[2]) = 0x6c6c6c6c6c6c6c6c;
+  *((unsigned long*)& __m256i_result[1]) = 0x6c6c6c6c6c6c6c6c;
+  *((unsigned long*)& __m256i_result[0]) = 0x6c6c6c6c6c6c6c6c;
+  __m256i_out = __lasx_xvori_b(__m256i_op0,0x6c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffff00fffffff0;
+  *((unsigned long*)& __m256i_result[3]) = 0x9f9f9f9f9f9f9f9f;
+  *((unsigned long*)& __m256i_result[2]) = 0x9f9f9f9fffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x9f9f9f9f9f9f9f9f;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff9fffffffff;
+  __m256i_out = __lasx_xvori_b(__m256i_op0,0x9f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvori_b(__m256i_op0,0x6a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffbdff3cffbdff44;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffbdff3cffbdff44;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffff7effffff46;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff7effffff46;
+  __m256i_out = __lasx_xvori_b(__m256i_op0,0x42);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_result[2]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_result[1]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_result[0]) = 0xbfbfbfbfbfbfbfbf;
+  __m256i_out = __lasx_xvori_b(__m256i_op0,0xbf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x2c2c2c2c2c2c2c2c;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x2c2c2c2c2c2c2c2c;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvori_b(__m256i_op0,0x2c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x5252525252525252;
+  *((unsigned long*)& __m256i_result[2]) = 0x5252525252525252;
+  *((unsigned long*)& __m256i_result[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m256i_result[0]) = 0x5252525252525252;
+  __m256i_out = __lasx_xvori_b(__m256i_op0,0x52);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fe363637fe36363;
+  __m256i_out = __lasx_xvori_b(__m256i_op0,0x63);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfefefefe3f800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfefefefe3f800000;
+  *((unsigned long*)& __m256i_result[3]) = 0xe0e0e0e0e0e0e0e0;
+  *((unsigned long*)& __m256i_result[2]) = 0xfefefefeffe0e0e0;
+  *((unsigned long*)& __m256i_result[1]) = 0xe0e0e0e0e0e0e0e0;
+  *((unsigned long*)& __m256i_result[0]) = 0xfefefefeffe0e0e0;
+  __m256i_out = __lasx_xvori_b(__m256i_op0,0xe0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x6b6b6b6b6b6b6b6b;
+  *((unsigned long*)& __m256i_result[2]) = 0x6b6b6b6b6b6b6b6b;
+  *((unsigned long*)& __m256i_result[1]) = 0x6b6b6b6b6b6b6b6b;
+  *((unsigned long*)& __m256i_result[0]) = 0x6b6b6b6b6b6b6b6b;
+  __m256i_out = __lasx_xvori_b(__m256i_op0,0x6b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
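+  /* __lasx_xvxori_b: XOR each byte of op0 with an 8-bit immediate.  */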
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000005e02;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000005e02;
+  *((unsigned long*)& __m256i_result[3]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_result[2]) = 0xc2c2c2c2c2c29cc0;
+  *((unsigned long*)& __m256i_result[1]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_result[0]) = 0xc2c2c2c2c2c29cc0;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0xc2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1616161616161616;
+  *((unsigned long*)& __m256i_op0[2]) = 0x161616167fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ffe16167f161616;
+  *((unsigned long*)& __m256i_op0[0]) = 0x161616167fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xc7c7c7c7c7c7c7c7;
+  *((unsigned long*)& __m256i_result[2]) = 0xc7c7c7c7ae2e2e2e;
+  *((unsigned long*)& __m256i_result[1]) = 0xae2fc7c7aec7c7c7;
+  *((unsigned long*)& __m256i_result[0]) = 0xc7c7c7c7ae2e2e2e;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0xd1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x5353535353535353;
+  *((unsigned long*)& __m256i_result[2]) = 0x5353535353535353;
+  *((unsigned long*)& __m256i_result[1]) = 0x5353535353535353;
+  *((unsigned long*)& __m256i_result[0]) = 0x5353535353535353;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0x53);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x6d6d6d6d6d6d6d6d;
+  *((unsigned long*)& __m256i_result[2]) = 0x6d6d6d6d6d6d6d6d;
+  *((unsigned long*)& __m256i_result[1]) = 0x6d6d6d6d6d6d6d6d;
+  *((unsigned long*)& __m256i_result[0]) = 0x6d6d6d6d6d6d6d6d;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0x6d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7171717171717171;
+  *((unsigned long*)& __m256i_result[2]) = 0x8e8e8e8e8e8e8e8e;
+  *((unsigned long*)& __m256i_result[1]) = 0x7171717171717171;
+  *((unsigned long*)& __m256i_result[0]) = 0x8e8e8e8e8e8e8e8e;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0x71);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_result[2]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_result[1]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_result[0]) = 0x7575757575757575;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0x75);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xa4a4a4a4a4a4a4a4;
+  *((unsigned long*)& __m256i_result[2]) = 0xa4a4a4a4a4a4a4a4;
+  *((unsigned long*)& __m256i_result[1]) = 0xa4a4a4a4a4a4a4a4;
+  *((unsigned long*)& __m256i_result[0]) = 0xa4a4a4a4a4a4a4a4;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0xa4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xa1a1a1a1a1a1a1a1;
+  *((unsigned long*)& __m256i_result[2]) = 0xa1a1a1a15e5e5e5e;
+  *((unsigned long*)& __m256i_result[1]) = 0xa1a1a1a1a1a1a1a1;
+  *((unsigned long*)& __m256i_result[0]) = 0xa1a1a1a15e5e5e5e;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0xa1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8d8d72728d8d7272;
+  *((unsigned long*)& __m256i_result[2]) = 0x8d8d72728d8d8d8d;
+  *((unsigned long*)& __m256i_result[1]) = 0x8d8d72728d8d7272;
+  *((unsigned long*)& __m256i_result[0]) = 0x8d8d72728d8d8d8d;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0x8d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xb3b3b3b3b3b3b3b3;
+  *((unsigned long*)& __m256i_result[2]) = 0xb3b3b3b3b3b3b3b3;
+  *((unsigned long*)& __m256i_result[1]) = 0xb3b3b3b3b3b3b3b3;
+  *((unsigned long*)& __m256i_result[0]) = 0xb3b3b3b3b3b3b3b3;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0x4c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffff800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f0000ff807f81;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffff800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f0000ff807f81;
+  *((unsigned long*)& __m256i_result[3]) = 0x5d5d5d5d5d22a2a2;
+  *((unsigned long*)& __m256i_result[2]) = 0xa2dda2a25d22dd23;
+  *((unsigned long*)& __m256i_result[1]) = 0x5d5d5d5d5d22a2a2;
+  *((unsigned long*)& __m256i_result[0]) = 0xa2dda2a25d22dd23;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0xa2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xd3d3d3d3d3d3d3d3;
+  *((unsigned long*)& __m256i_result[2]) = 0xd3d3d3d3d3d3d3d3;
+  *((unsigned long*)& __m256i_result[1]) = 0xd3d3d3d3d3d3d3d3;
+  *((unsigned long*)& __m256i_result[0]) = 0xd3d3d3d3d3d3d3d3;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0xd3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfa15fa15fa15fa14;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfa15fa15fa15fa14;
+  *((unsigned long*)& __m256i_result[3]) = 0x8282828282828282;
+  *((unsigned long*)& __m256i_result[2]) = 0x8768876887688769;
+  *((unsigned long*)& __m256i_result[1]) = 0x8282828282828282;
+  *((unsigned long*)& __m256i_result[0]) = 0x8768876887688769;
+  __m256i_out = __lasx_xvxori_b(__m256i_op0,0x7d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
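+  /* __lasx_xvnori_b: NOR each byte of op0 with an 8-bit immediate, i.e. ~(byte | imm).  */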
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_result[2]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_result[1]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_result[0]) = 0x45c5c5c545c5c5c5;
+  __m256i_out = __lasx_xvnori_b(__m256i_op0,0x3a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007773;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000003373;
+  *((unsigned long*)& __m256i_result[3]) = 0xbbbbbbbbbbbbbbbb;
+  *((unsigned long*)& __m256i_result[2]) = 0xbbbbbbbbbbbb8888;
+  *((unsigned long*)& __m256i_result[1]) = 0xbbbbbbbbbbbbbbbb;
+  *((unsigned long*)& __m256i_result[0]) = 0xbbbbbbbbbbbb8888;
+  __m256i_out = __lasx_xvnori_b(__m256i_op0,0x44);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[0]) = 0xf7f7f7f7f7f7f7f7;
+  __m256i_out = __lasx_xvnori_b(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xdededededededede;
+  *((unsigned long*)& __m256i_result[2]) = 0xdededededededede;
+  *((unsigned long*)& __m256i_result[1]) = 0xdededededededede;
+  *((unsigned long*)& __m256i_result[0]) = 0xdededededededede;
+  __m256i_out = __lasx_xvnori_b(__m256i_op0,0x21);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvnori_b(__m256i_op0,0x33);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[2]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[1]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[0]) = 0x9090909090909090;
+  __m256i_out = __lasx_xvnori_b(__m256i_op0,0x6f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080808080808;
+  __m256i_out = __lasx_xvnori_b(__m256i_op0,0xf7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x5858585858585858;
+  *((unsigned long*)& __m256i_result[2]) = 0x5858585858585858;
+  *((unsigned long*)& __m256i_result[1]) = 0x5858585858585858;
+  *((unsigned long*)& __m256i_result[0]) = 0x5858585858585858;
+  __m256i_out = __lasx_xvnori_b(__m256i_op0,0xa7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_result[2]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_result[1]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_result[0]) = 0x3d3d3d3d3d3d3d3d;
+  __m256i_out = __lasx_xvnori_b(__m256i_op0,0xc2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x9d9d9d9d9d9d9d8d;
+  *((unsigned long*)& __m256i_result[2]) = 0x9d9d9d9d9d9d9d9d;
+  *((unsigned long*)& __m256i_result[1]) = 0x9d9d9d9d9d9d9d8d;
+  *((unsigned long*)& __m256i_result[0]) = 0x9d9d9d9d9d9d9d9d;
+  __m256i_out = __lasx_xvnori_b(__m256i_op0,0x62);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2a2a2a2a2a2a2a2a;
+  *((unsigned long*)& __m256i_result[2]) = 0x2a2a2a2a2a2a2a2a;
+  *((unsigned long*)& __m256i_result[1]) = 0x2a2a2a2a2a2a2a2a;
+  *((unsigned long*)& __m256i_result[0]) = 0x2a2a2a2a2a2a2a2a;
+  __m256i_out = __lasx_xvnori_b(__m256i_op0,0xd5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000081220000812c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000812000008120;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000081220000812c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000812000008120;
+  *((unsigned long*)& __m256i_result[3]) = 0xe9e968c9e9e968c1;
+  *((unsigned long*)& __m256i_result[2]) = 0xe9e968c9e9e968c9;
+  *((unsigned long*)& __m256i_result[1]) = 0xe9e968c9e9e968c1;
+  *((unsigned long*)& __m256i_result[0]) = 0xe9e968c9e9e968c9;
+  __m256i_out = __lasx_xvnori_b(__m256i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
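+  /* xvsll.b/h/w/d cases: per-element logical left shift, where the shift
+     amount is the corresponding element of the second operand taken modulo
+     the element width (which is what the expected vectors below assume).  */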
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvsll_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00001f41ffffbf00;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00001f41ffffbf00;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffe0000000;
+  __m256i_out = __lasx_xvsll_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7f00000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000fffefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000fffefe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000808080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefefffffcfa;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffefefffffcfa;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvsll_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff2f7bcfff2f7bd;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff2f93bfff2fff2;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff2f7bcfff2f7bd;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff2f93bfff2fff2;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffcf800fffcfffc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fffcfffc;
+  __m256i_out = __lasx_xvsll_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffffe40;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fff0e400;
+  __m256i_out = __lasx_xvsll_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x5980000000000000;
+  __m256i_out = __lasx_xvsll_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[2]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[0]) = 0x8080808080808080;
+  __m256i_out = __lasx_xvsll_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvsll_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000200000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000800000000;
+  __m256i_out = __lasx_xvsll_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000000001ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffe0000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000001ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffe0000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000001ff8000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001ff8000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvsll_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvsll_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvsll_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff800000000000;
+  __m256i_out = __lasx_xvsll_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00010001000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00010001000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x800000ff800000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x800000ff800000ff;
+  __m256i_out = __lasx_xvsll_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsll_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
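+  /* xvsrl.b/h/w/d cases: per-element logical right shift, again with the
+     shift amount taken modulo the element width.  */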
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvsrl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x247fe49409620040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x247fe49409620040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x6580668200fe0002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6580668200fe0002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x247fe49409620040;
+  *((unsigned long*)& __m256i_result[2]) = 0x247fe49409620040;
+  *((unsigned long*)& __m256i_result[1]) = 0x6580668200fe0002;
+  *((unsigned long*)& __m256i_result[0]) = 0x6580668200fe0002;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000003f7e3f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffc6cc05c64d960e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000003f7e3f;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff874dc687870000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000003f7e3f;
+  *((unsigned long*)& __m256i_result[2]) = 0xffc6cc05c64d960e;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000003f7e3f;
+  *((unsigned long*)& __m256i_result[0]) = 0xff874dc687870000;
+  __m256i_out = __lasx_xvsrl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffba0c05;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffba0c05;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00ffffff00ffff;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000200;
+  __m256i_out = __lasx_xvsrl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvsrl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000080000000800;
+  __m256i_out = __lasx_xvsrl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffff70156;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffff70156;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffff70156;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffff70156;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvsrl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff7fff0000;
+  __m256i_out = __lasx_xvsrl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvsrl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000001010800;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000001010800;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000008e4bfc4eff0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001ffee10000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000008e4bfc4eff0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001ffee10000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0d0d0d000000000d;
+  *((unsigned long*)& __m256i_result[2]) = 0x0d0d0d0000060d0d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0d0d0d000000000d;
+  *((unsigned long*)& __m256i_result[0]) = 0x0d0d0d0000060d0d;
+  __m256i_out = __lasx_xvsrl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000408080c111414;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000408080c111414;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000408080c111414;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000408080c111414;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000010;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000e0000000d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000e0000000d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffff03ffffff07;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff03ffffff07;
+  __m256i_out = __lasx_xvsrl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000800080008000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x80008000fff98000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000800080008000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x80008000fff98000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000040004000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000040004000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000040404000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000040404000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000040004000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000040004000;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffefefefe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffefefefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffefefefe;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffefefefe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000040404040;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfe01fe01fd02fd02;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fc03fc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfe01fe01fd02fd02;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003fc03fc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001010100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000405;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001010100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000405;
+  *((unsigned long*)& __m256i_result[3]) = 0xfe01fe017e81fd02;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000003fc001fe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfe01fe017e81fd02;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000003fc001fe;
+  __m256i_out = __lasx_xvsrl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000010000685e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000020a4ffffbe4f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000010000685e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000020a4ffffbe4f;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000003ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001ffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000003ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001ffffffffffff;
+  __m256i_out = __lasx_xvsrl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000001ffff8000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000001ffff8000;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_result[2]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_result[1]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_result[0]) = 0xfd02fd02fd02fd02;
+  __m256i_out = __lasx_xvsrl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001fffe0005fff9;
+  *((unsigned long*)& __m256i_op0[2]) = 0x04f004f204f204f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001fffe0005fff9;
+  *((unsigned long*)& __m256i_op0[0]) = 0x04f004f204f204f0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000002780;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000002780;
+  __m256i_out = __lasx_xvsrl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
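+  /* The tests below exercise the __lasx_xvsra_{b,h,w,d} intrinsics
+     (per-element arithmetic right shift).  Each element's shift count is
+     the corresponding element of the second operand, taken modulo the
+     element width, so sign bits are replicated into the vacated positions.  */
+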
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc5890a0a07070707;
+  *((unsigned long*)& __m256i_op1[2]) = 0x006be0e4180b8024;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1b399540334c966c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x71d7dd7aefcac001;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffbf7f7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffe651bfff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffbf7f7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffe651bfff;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffe0000000;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000800000004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000bf6e0000c916;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000030000fff3;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000800000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000bf6e0000c916;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000030000fff3;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fff0e400;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffffe40;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fff0e400;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9cffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9cffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1cfd000000000000;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000017e007ffe02;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x017e00ff017e00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000004500f300fb;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000004500f300fb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op0[1]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fffffffa;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffffffa;
+  *((unsigned long*)& __m256i_result[3]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_result[2]) = 0x6161616100000018;
+  *((unsigned long*)& __m256i_result[1]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_result[0]) = 0x6161616100000018;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000004411;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000004411;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x009f00f8007e00f0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f007f0081007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x009f00f8007e00f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f007f0081007f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x009f00f8007e00f0;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f007f0081007f;
+  *((unsigned long*)& __m256i_result[1]) = 0x009f00f8007e00f0;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f007f0081007f;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_result[2]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fe01ae00ff00ff;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_result[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fe1ffe0ffe1ffe0;
+  __m256i_out = __lasx_xvsra_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0007000700070007;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6040190d20227a78;
+  *((unsigned long*)& __m256i_op0[1]) = 0x132feeabd2d33b38;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0004000f00100003;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000400030010000f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0004000f00100003;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000400030010000f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000c0300000019a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0c08032100004044;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000265ffa5a6767;
+  *((unsigned long*)& __m256i_result[0]) = 0x0c08032100000000;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007f433c78;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsra_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00feff0100feff01;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00feff0100feff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff801000000010;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800300000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff801000000010;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800300000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff801000000010;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff800300000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff801000000010;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff800300000000;
+  __m256i_out = __lasx_xvsra_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff000000017fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff000000017fff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000f00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000f00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
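+  /* The tests below exercise the __lasx_xvrotr_{b,h,w,d} intrinsics
+     (per-element rotate right).  Each element's rotate count is the
+     corresponding element of the second operand, taken modulo the
+     element width; a count of zero leaves the element unchanged.  */
+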
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff0001ff02;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff020afefc;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000003fefd;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffefffefff7fff7;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7ffffffbfffb;
+  __m256i_out = __lasx_xvrotr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff0001ff02;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff020afefc;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000003fefd;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0209fefb08140000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0003fffc00060000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff0001ff04;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff02a0fefc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000cfefd;
+  __m256i_out = __lasx_xvrotr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff01ff010000fff9;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ff19;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff02ff020001fffa;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000100010001fffa;
+  *((unsigned long*)& __m256i_result[3]) = 0x807f807f00000380;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007380;
+  *((unsigned long*)& __m256i_result[1]) = 0xc03fc03f000001c0;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000001c0;
+  __m256i_out = __lasx_xvrotr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffffe40;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fffffe40;
+  __m256i_out = __lasx_xvrotr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvrotr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvrotr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000fedd;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000fedd;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000fedd;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000fedd;
+  __m256i_out = __lasx_xvrotr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvrotr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x805f0000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x805f0000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x805f0000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x805f0000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[2]) = 0x80be0000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[0]) = 0x80be0000ffffffff;
+  __m256i_out = __lasx_xvrotr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000457db03e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff457db03f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000457db03e;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff457db03f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000457d607d;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff457d607f;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000457d607d;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff457d607f;
+  __m256i_out = __lasx_xvrotr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x07ffffff07ffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x07ffffff07ffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x07ffffff07ffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x07ffffff07ffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x0ffffffe0ffffffe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0ffffffe0ffffffe;
+  __m256i_out = __lasx_xvrotr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffe0047d00e00480;
+  *((unsigned long*)& __m256i_op0[2]) = 0x001fc0200060047a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffe0047d00e00480;
+  *((unsigned long*)& __m256i_op0[0]) = 0x001fc0200060047a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffe0047d00e00480;
+  *((unsigned long*)& __m256i_result[2]) = 0x001fc0200060047a;
+  *((unsigned long*)& __m256i_result[1]) = 0xffe0047d00e00480;
+  *((unsigned long*)& __m256i_result[0]) = 0x001fc0200060047a;
+  __m256i_out = __lasx_xvrotr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x386000003df80000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x386000003df80000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ca0000fff80000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ca0000fff80000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x381800007af80000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x381800007af80000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffff0001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffff0001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000086fe0000403e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000403e00004040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000086fe0000403e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000403e00004040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000086fe0000403e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000403e00004040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000086fe0000403e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000403e00004040;
+  *((unsigned long*)& __m256i_result[3]) = 0x00001bfa000000f9;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000f900004040;
+  *((unsigned long*)& __m256i_result[1]) = 0x00001bfa000000f9;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000f900004040;
+  __m256i_out = __lasx_xvrotr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvrotr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff0607ffff0607;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0607ffff0607;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff0607ffff0607;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0607ffff0607;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000faf3f3f2;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000faf3f3f2;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff0607ffff0383;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0607ffffc0c1;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff0607ffff0383;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0607ffffc0c1;
+  __m256i_out = __lasx_xvrotr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvrotr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007f433c79;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007f433c79;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000007f8000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000007f8000;
+  __m256i_out = __lasx_xvrotr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000001fff000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000001fff000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffdfff80;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffdfff80;
+  __m256i_out = __lasx_xvrotr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000010000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01fa022a01a401e5;
+  *((unsigned long*)& __m256i_op0[2]) = 0x030d03aa0079029b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x024c01f901950261;
+  *((unsigned long*)& __m256i_op0[0]) = 0x008102c2008a029f;
+  *((unsigned long*)& __m256i_result[3]) = 0x54000000ca000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x5400000036000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xf2000000c2000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x840000003e000000;
+  __m256i_out = __lasx_xvslli_w(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff1001100100000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010100000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff1001100100000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010100000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfcc4004400400000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0040400000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfcc4004400400000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0040400000000000;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_d(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffef000004ea;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffef000004ea;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffefffffffef;
+  __m256i_out = __lasx_xvslli_h(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffbf4;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256i_result[3]) = 0xf800f800f800c000;
+  *((unsigned long*)& __m256i_result[2]) = 0xf800f800f800a000;
+  *((unsigned long*)& __m256i_result[1]) = 0xf800f800f800e000;
+  *((unsigned long*)& __m256i_result[0]) = 0xf800f800f800e000;
+  __m256i_out = __lasx_xvslli_h(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffefefffffefe;
+  __m256i_out = __lasx_xvslli_h(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0100010001000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0100010001000000;
+  __m256i_out = __lasx_xvslli_d(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xf000000000000000;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_w(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1616161616161616;
+  *((unsigned long*)& __m256i_op0[2]) = 0x161616167fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ffe16167f161616;
+  *((unsigned long*)& __m256i_op0[0]) = 0x161616167fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x2c2c2c2c2c2c2c2c;
+  *((unsigned long*)& __m256i_result[2]) = 0x2c2c2c2cfefefefe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfefc2c2cfe2c2c2c;
+  *((unsigned long*)& __m256i_result[0]) = 0x2c2c2c2cfefefefe;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xf8f8f8f8f8f8f8f8;
+  *((unsigned long*)& __m256i_result[2]) = 0xf8f8f8f8f8f8f8f8;
+  *((unsigned long*)& __m256i_result[1]) = 0xf8f8f8f8f8f8f8f8;
+  *((unsigned long*)& __m256i_result[0]) = 0xf8f8f8f8f8f8f8f8;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1f60000000c00000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1f60000000c00000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x60000000c0000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x60000000c0000000;
+  __m256i_out = __lasx_xvslli_h(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff8fff8fff8fff8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff8fff8fff8fff8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff80ff80ff80ff80;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff80ff80ff80ff80;
+  __m256i_out = __lasx_xvslli_h(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000008000000080;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00080008000801ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0008000800080008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00080008000801ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0008000800080008;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_d(__m256i_op0,0x3f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_result[2]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_result[0]) = 0xf0f0f0f0f0f0f0f0;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_w(__m256i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_w(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x03f0000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x03f0000000000000;
+  __m256i_out = __lasx_xvslli_d(__m256i_op0,0x34);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_w(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffff80000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffff80000;
+  __m256i_out = __lasx_xvslli_d(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfefefefefefefefe;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xf800f800f800f800;
+  *((unsigned long*)& __m256i_result[2]) = 0xf800f800f800f800;
+  *((unsigned long*)& __m256i_result[1]) = 0xf800f800f800f800;
+  *((unsigned long*)& __m256i_result[0]) = 0xf800f800f800f800;
+  __m256i_out = __lasx_xvslli_h(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_h(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_h(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0404000004040000;
+  __m256i_out = __lasx_xvslli_w(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_d(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000004843ffdff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000004843ffdff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000c040c0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000c040c0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_b(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffff000000;
+  __m256i_out = __lasx_xvslli_d(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslli_h(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_w(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000050005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_h(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_h(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1010101110101011;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1111111211111112;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000004040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000004444;
+  __m256i_out = __lasx_xvsrli_d(__m256i_op0,0x2e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_d(__m256i_op0,0x3e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_d(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffcc8000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007dfdff4b;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x003ffff300000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000001f7f7f;
+  __m256i_out = __lasx_xvsrli_w(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x9240f24a84b18025;
+  *((unsigned long*)& __m256i_op0[2]) = 0x9240f24a84b18025;
+  *((unsigned long*)& __m256i_op0[1]) = 0xb2c0b341807f8006;
+  *((unsigned long*)& __m256i_op0[0]) = 0xb2c0b341807f8006;
+  *((unsigned long*)& __m256i_result[3]) = 0x009200f200840080;
+  *((unsigned long*)& __m256i_result[2]) = 0x009200f200840080;
+  *((unsigned long*)& __m256i_result[1]) = 0x00b200b300800080;
+  *((unsigned long*)& __m256i_result[0]) = 0x00b200b300800080;
+  __m256i_out = __lasx_xvsrli_h(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_h(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001ffff0001ffff;
+  __m256i_out = __lasx_xvsrli_w(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_b(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_b(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_h(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_d(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffcb423a587053;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6d46f43e71141b81;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffcb423a584528;
+  *((unsigned long*)& __m256i_op0[0]) = 0x9bdf36c8d78158a1;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000007fffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000036a37;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000007fffe;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000004def9;
+  __m256i_out = __lasx_xvsrli_d(__m256i_op0,0x2d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_h(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_d(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0889088908810881;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0081010000810100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0889088900810088;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0081010000810100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0004448444844084;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000408080004080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0004448444804080;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000408080004080;
+  __m256i_out = __lasx_xvsrli_d(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000001d001d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000001d001d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000030003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000030003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_h(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_d(__m256i_op0,0x22);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000077fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000307;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_b(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000a0010400a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000a0010400a;
+  __m256i_out = __lasx_xvsrli_w(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000598;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000598;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_w(__m256i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_d(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007f807f80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f807f80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ff00;
+  __m256i_out = __lasx_xvsrli_w(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_b(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_h(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001ffff0001ffff;
+  __m256i_out = __lasx_xvsrli_w(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_h(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_d(__m256i_op0,0x23);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_h(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x3fffffff3fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x3fffffff3fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrli_w(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003fff00003fff;
+  __m256i_out = __lasx_xvsrli_w(__m256i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_result[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x3fff3fff3fff3fc4;
+  *((unsigned long*)& __m256i_result[1]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x3fff3fff3fff3fc4;
+  __m256i_out = __lasx_xvsrli_h(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc5890a0a07070707;
+  *((unsigned long*)& __m256i_op1[2]) = 0x006be0e4180b8024;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1b399540334c966c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x71d7dd7aefcac001;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffbf7f7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffe651bfff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffbf7f7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffe651bfff;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffe0000000;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000800000004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000bf6e0000c916;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000030000fff3;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000800000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000bf6e0000c916;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000030000fff3;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fff0e400;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffffe40;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fff0e400;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9cffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9cffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1cfd000000000000;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000017e007ffe02;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x017e00ff017e00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000004500f300fb;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000004500f300fb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op0[1]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fffffffa;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffffffa;
+  *((unsigned long*)& __m256i_result[3]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_result[2]) = 0x6161616100000018;
+  *((unsigned long*)& __m256i_result[1]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_result[0]) = 0x6161616100000018;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000004411;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000004411;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x009f00f8007e00f0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f007f0081007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x009f00f8007e00f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f007f0081007f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x009f00f8007e00f0;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f007f0081007f;
+  *((unsigned long*)& __m256i_result[1]) = 0x009f00f8007e00f0;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f007f0081007f;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_result[2]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fe01ae00ff00ff;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_result[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fe1ffe0ffe1ffe0;
+  __m256i_out = __lasx_xvsra_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0007000700070007;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6040190d20227a78;
+  *((unsigned long*)& __m256i_op0[1]) = 0x132feeabd2d33b38;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0004000f00100003;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000400030010000f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0004000f00100003;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000400030010000f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000c0300000019a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0c08032100004044;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000265ffa5a6767;
+  *((unsigned long*)& __m256i_result[0]) = 0x0c08032100000000;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007f433c78;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsra_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00feff0100feff01;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00feff0100feff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff801000000010;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800300000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff801000000010;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800300000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff801000000010;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff800300000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff801000000010;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff800300000000;
+  __m256i_out = __lasx_xvsra_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff000000017fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff000000017fff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsra_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000f00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000f00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsra_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
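+  /* The tests below exercise __lasx_xvrotri_{b,h,w,d}: each element of the
+     source vector is rotated right by the immediate bit count.  */
+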
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_h(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x1);
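+  /* int_out from the xvpickve2gr_w call above is not compared against an
+     expected value.  */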
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_h(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_d(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbea2e127c046721f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1729c073816edebe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xde91f010000006f9;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5ef1f90efefaf30d;
+  *((unsigned long*)& __m256i_result[3]) = 0x515f93f023600fb9;
+  *((unsigned long*)& __m256i_result[2]) = 0x948b39e0b7405f6f;
+  *((unsigned long*)& __m256i_result[1]) = 0x48ef087800007c83;
+  *((unsigned long*)& __m256i_result[0]) = 0x78af877c7d7f86f9;
+  __m256i_out = __lasx_xvrotri_h(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvrotri_h(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f7f7f7f7fff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7f007f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f7f7f7f7fff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_result[2]) = 0xbfbfbfbfbfff807f;
+  *((unsigned long*)& __m256i_result[1]) = 0xbf803fbfbfbfbfbf;
+  *((unsigned long*)& __m256i_result[0]) = 0xbfbfbfbfbfff807f;
+  __m256i_out = __lasx_xvrotri_d(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffef;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffef;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000010;
+  __m256i_out = __lasx_xvrotri_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000002a5429;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000002a5429;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000002a54290;
+  __m256i_out = __lasx_xvrotri_w(__m256i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000907;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000907;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_w(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000007f0000007f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000007f0000007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff01ff80ff01ff80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff01ff800000007e;
+  *((unsigned long*)& __m256i_result[3]) = 0x003f8000003f8000;
+  *((unsigned long*)& __m256i_result[2]) = 0x003f8000003f8000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffc07f80ffc07f80;
+  *((unsigned long*)& __m256i_result[0]) = 0xffc07f80003f0000;
+  __m256i_out = __lasx_xvrotri_w(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_d(__m256i_op0,0x24);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvrotri_d(__m256i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffff6f20;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff6f20;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xdbc8000000003fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xdbc8000000003fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_d(__m256i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[3]) = 0x4343434343434343;
+  *((unsigned long*)& __m256i_result[2]) = 0x4343434343434343;
+  *((unsigned long*)& __m256i_result[1]) = 0x4343434343434343;
+  *((unsigned long*)& __m256i_result[0]) = 0x4343434343434343;
+  __m256i_out = __lasx_xvrotri_h(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_d(__m256i_op0,0x38);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffdffd;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffdffd;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffdffd;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffdffd;
+  __m256i_out = __lasx_xvrotri_h(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_d(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvrotri_w(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_d(__m256i_op0,0x35);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_w(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000f0000000f000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000f0000000f000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000f0000000f000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000f0000000f000;
+  __m256i_out = __lasx_xvrotri_w(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvrotri_h(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000007fc00000400;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000040000000400;
+  *((unsigned long*)& __m256i_result[1]) = 0x000007fc00000400;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000040000000400;
+  __m256i_out = __lasx_xvrotri_d(__m256i_op0,0x35);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000008000;
+  __m256i_out = __lasx_xvrotri_h(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_b(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvrotri_b(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_h(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_w(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f0000007f0060;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f0000007f0060;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00f7000000f70006;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00f7000000f70006;
+  __m256i_out = __lasx_xvrotri_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_d(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrotri_h(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffefffffffeff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefffffffeff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffefffffffeff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffefffffffeff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffbfffffffb;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffbfffffffb;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffbfffffffb;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffbfffffffb;
+  __m256i_out = __lasx_xvrotri_h(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
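+  /* The tests below exercise __lasx_xvsllwil_{h_b,w_h,d_w} and their unsigned
+     variants: the low-half elements of each 128-bit lane are sign- or
+     zero-extended to double width and then shifted left by the immediate.  */
+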
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffc00;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffc00;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffc00;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffc00;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007f00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x311d73ad3ec2064a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000001fc000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000c475ceb40000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000fb0819280000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffe0ffe0ffe0ffe0;
+  *((unsigned long*)& __m256i_result[2]) = 0xffe0ffe0ffe0ffe0;
+  *((unsigned long*)& __m256i_result[1]) = 0xffe0ffe0ffe0ffe0;
+  *((unsigned long*)& __m256i_result[0]) = 0xffe0ffe0ffe0ffe0;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfc00000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffefffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffbf4;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffffc;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000002;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f80780000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f80780000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000004000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000054;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00aa000000ac00fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000054;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00aa000000ac00fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002a80000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0002b0000003f800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0002a80000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0002b0000003f800;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffefe00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x03fc03fc03f803f8;
+  *((unsigned long*)& __m256i_result[2]) = 0x03fc03fc03f803f8;
+  *((unsigned long*)& __m256i_result[1]) = 0x03fc03fc03f803f8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fe01fe01fe01fe;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000003f0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000003f0;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc1be9e9e9f000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x41d8585858400000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc1be9e9e9f000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x41d8585858400000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1076000016160000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1610000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1076000016160000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1610000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x01fc03e000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x01fc03e000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000ff0000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdbc8000000003fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xdbc8000000003fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffbff1ffffbff1;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffbff1ffffbff1;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffeffc4000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffeffc4000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffeffc4000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffeffc4000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0004040404000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0004040404000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0004040404000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0004040404000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffe05fc47b400;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffe06003fc000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffe05fc47b400;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffe06003fc000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0218ff78fc38fc38;
+  *((unsigned long*)& __m256i_result[2]) = 0xfc00000000000048;
+  *((unsigned long*)& __m256i_result[1]) = 0x0218ff78fc38fc38;
+  *((unsigned long*)& __m256i_result[0]) = 0xfc00000000000048;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000feccfecc;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000feccfecc;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000007c8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000007c8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000007c8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000007c8;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00fe01e000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00fe01e000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000086000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00040ff288000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000086000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00040ff288000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fe36364661af18f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fe363637fe36364;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fe36364661af18f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fe363637fe36364;
+  *((unsigned long*)& __m256i_result[3]) = 0x00001ff8d8d8c000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00001ff8d8d90000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00001ff8d8d8c000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00001ff8d8d90000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000008000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fff000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fff000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000001ffe00000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000001ffe00000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80000000ffc8ff88;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80000000ffc8ff88;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001ff91ff100000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001ff91ff100000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000008c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000008c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001180000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001180000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffc00fffffc00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffc00fffffc00;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
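+  /* Standalone element extraction: xvpickve2gr.du copies the selected
+     64-bit element to a general-purpose register; the scalar result is
+     stored but not asserted here.  */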
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x3);
+  *((unsigned long*)& __m256i_op0[3]) = 0x07fee332883f86b0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x07fed3c8f7ad28d0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x07fee332883f86b0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x07fed3c8f7ad28d0;
+  *((unsigned long*)& __m256i_result[3]) = 0x01c03f8034c03200;
+  *((unsigned long*)& __m256i_result[2]) = 0x3dc02b400a003400;
+  *((unsigned long*)& __m256i_result[1]) = 0x01c03f8034c03200;
+  *((unsigned long*)& __m256i_result[0]) = 0x3dc02b400a003400;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffc0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff0fff0fff0fc00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff0fff0fff0fc00;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0040000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0040000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000a000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000a000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000400000004000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
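+  /* The following block exercises the vector branch-condition
+     intrinsics: __lasx_xbz_v is true when the whole register is zero,
+     __lasx_xbz_{b,h,w,d} when any element of that width is zero, and
+     the __lasx_xbnz_* forms are the logical complements.  The integer
+     results are assigned but not asserted in this block.  */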
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000fff8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  int_out = __lasx_xbz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000f0000000f;
+  int_out = __lasx_xbnz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000808081;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000808081;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000808081;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000808081;
+  int_out = __lasx_xbnz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff1ffca0011feca;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff1ffca0011feca;
+  int_out = __lasx_xbz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000080008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fff7fff;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff000000000000;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xefff000100000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xefff000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf000000000000000;
+  int_out = __lasx_xbz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001000000010;
+  int_out = __lasx_xbz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010001;
+  int_out = __lasx_xbnz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xbnz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe1616161e1614e60;
+  int_out = __lasx_xbnz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  int_out = __lasx_xbz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xbnz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xfbba01c0003f7e3f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffc6cc05c64d960e;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfbd884e7003f7e3f;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff874dc687870000;
+  int_out = __lasx_xbnz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010183f95466;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01010101d58efe94;
+  int_out = __lasx_xbz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xbz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000400;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000400;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xbz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff80ff80ff80ff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff80ff80ff80ff80;
+  int_out = __lasx_xbz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00010003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0080000200000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00010003;
+  int_out = __lasx_xbnz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4040404040404040;
+  int_out = __lasx_xbnz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001f0000ffff;
+  int_out = __lasx_xbz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffe;
+  int_out = __lasx_xbz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0003000300030003;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0003000300030003;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x2c2c2c2c2c2c2c2c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2c2c2c2c2c2c2c2c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffd8ffc7ffdaff8a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffd8ffc7ffdaff8a;
+  int_out = __lasx_xbz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000b0b100015d1e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001fffe0001bfff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000b0b100015d1e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001fffe0001bfff;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00fe00ff00fe;
+  int_out = __lasx_xbnz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1f001f00000007ef;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00001fff200007ef;
+  int_out = __lasx_xbz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  int_out = __lasx_xbz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001fe01fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ff0100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001fe01fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000ff0100;
+  int_out = __lasx_xbz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffa2beb040;
+  int_out = __lasx_xbz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  int_out = __lasx_xbz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00000000;
+  int_out = __lasx_xbnz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff10000fff10000;
+  int_out = __lasx_xbnz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000080;
+  int_out = __lasx_xbz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xfdfdfdfdfdfdfdfd;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe27fe2821d226278;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfdfdfdfdfdfdfdfd;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe27fe2821d226278;
+  int_out = __lasx_xbnz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00f7000000f70007;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00f7000000f70007;
+  int_out = __lasx_xbz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ffffffffffffff;
+  int_out = __lasx_xbnz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ffffffffffffff;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x00b213171dff0606;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00e9a80014ff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00b213171dff0606;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00e9a80014ff0000;
+  int_out = __lasx_xbnz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000008;
+  int_out = __lasx_xbz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff8000000000000;
+  int_out = __lasx_xbnz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xbz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffe00fe00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000001fe01dde;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffe00fe00;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000001fe01dde;
+  int_out = __lasx_xbz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000a0008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000a0008;
+  int_out = __lasx_xbnz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffc0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fff0ffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffc0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fff0ffc0;
+  int_out = __lasx_xbz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffefff80;
+  int_out = __lasx_xbnz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xff808000ff808000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc3038000ff808000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff808000ff808000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc3038000ff808000;
+  int_out = __lasx_xbnz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff60000280;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000f64fab372db5;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff60000280;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000f64fab372db5;
+  int_out = __lasx_xbz_h(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff0000ffff;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff0000;
+  int_out = __lasx_xbnz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbz_d(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000001f4;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000001f4;
+  int_out = __lasx_xbnz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xbnz_v(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000180000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000180000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  int_out = __lasx_xbz_w(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000b0cfffff4f3;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000f9bb562f56c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000b0cfffff4f3;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000f9bb562f56c80;
+  int_out = __lasx_xbnz_b(__m256i_op0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000b0cfffff4f3;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000f9bb562f56c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000b0cfffff4f3;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000f9bb562f56c80;
+  int_out = __lasx_xbnz_b(__m256i_op0);
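+
+  /* xvextl.q.d sign-extends the low doubleword of each 128-bit lane to
+     a full quadword.  */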
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextl_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextl_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextl_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x43ef878780000009;
+  __m256i_out = __lasx_xvextl_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000201220001011c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000201220001011c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000201220001011c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000201220001011c;
+  __m256i_out = __lasx_xvextl_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextl_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
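+  /* Remaining xvsllwil cases; the next two cases repeat the same input
+     vector and shift amount.  */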
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffc00;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffc00;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffc00;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffc00;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007f00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x311d73ad3ec2064a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000001fc000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000c475ceb40000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000fb0819280000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffe0ffe0ffe0ffe0;
+  *((unsigned long*)& __m256i_result[2]) = 0xffe0ffe0ffe0ffe0;
+  *((unsigned long*)& __m256i_result[1]) = 0xffe0ffe0ffe0ffe0;
+  *((unsigned long*)& __m256i_result[0]) = 0xffe0ffe0ffe0ffe0;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfc00000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffefffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffbf4;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffffc;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000002;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f80780000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f80780000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000004000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000054;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00aa000000ac00fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000054;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00aa000000ac00fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002a80000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0002b0000003f800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0002a80000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0002b0000003f800;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffefe00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x03fc03fc03f803f8;
+  *((unsigned long*)& __m256i_result[2]) = 0x03fc03fc03f803f8;
+  *((unsigned long*)& __m256i_result[1]) = 0x03fc03fc03f803f8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fe01fe01fe01fe;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000003f0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000003f0;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc1be9e9e9f000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x41d8585858400000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc1be9e9e9f000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x41d8585858400000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1076000016160000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1610000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1076000016160000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1610000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x01fc03e000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x01fc03e000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000ff0000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdbc8000000003fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xdbc8000000003fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffbff1ffffbff1;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffbff1ffffbff1;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffeffc4000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffeffc4000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffeffc4000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffeffc4000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0004040404000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0004040404000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0004040404000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0004040404000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffe05fc47b400;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffe06003fc000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffe05fc47b400;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffe06003fc000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0218ff78fc38fc38;
+  *((unsigned long*)& __m256i_result[2]) = 0xfc00000000000048;
+  *((unsigned long*)& __m256i_result[1]) = 0x0218ff78fc38fc38;
+  *((unsigned long*)& __m256i_result[0]) = 0xfc00000000000048;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000feccfecc;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000feccfecc;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000007c8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000007c8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000007c8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000007c8;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00fe01e000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00fe01e000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000086000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00040ff288000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000086000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00040ff288000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fe36364661af18f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fe363637fe36364;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fe36364661af18f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fe363637fe36364;
+  *((unsigned long*)& __m256i_result[3]) = 0x00001ff8d8d8c000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00001ff8d8d90000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00001ff8d8d8c000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00001ff8d8d90000;
+  __m256i_out = __lasx_xvsllwil_d_w(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000008000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fff000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fff000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000001ffe00000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000001ffe00000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80000000ffc8ff88;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80000000ffc8ff88;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001ff91ff100000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001ff91ff100000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_wu_hu(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000008c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000008c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001180000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001180000000;
+  __m256i_out = __lasx_xvsllwil_du_wu(__m256i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffc00fffffc00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffc00fffffc00;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
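+  /* Scalar extraction with __lasx_xvpickve2gr_du: doubleword 3 of the
+     all-zero vector (value 0) is read into unsigned_long_int_out.  */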
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x3);
+  *((unsigned long*)& __m256i_op0[3]) = 0x07fee332883f86b0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x07fed3c8f7ad28d0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x07fee332883f86b0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x07fed3c8f7ad28d0;
+  *((unsigned long*)& __m256i_result[3]) = 0x01c03f8034c03200;
+  *((unsigned long*)& __m256i_result[2]) = 0x3dc02b400a003400;
+  *((unsigned long*)& __m256i_result[1]) = 0x01c03f8034c03200;
+  *((unsigned long*)& __m256i_result[0]) = 0x3dc02b400a003400;
+  __m256i_out = __lasx_xvsllwil_hu_bu(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffc0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff0fff0fff0fc00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff0fff0fff0fc00;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsllwil_h_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0040000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0040000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000a000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000a000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000400000004000;
+  __m256i_out = __lasx_xvsllwil_w_h(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
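+  /* The cases below exercise __lasx_xvextl_qu_du: in each 128-bit lane the
+     low unsigned doubleword is widened to 128 bits, i.e. it is kept in the
+     low half of the lane and the high doubleword is cleared, as the expected
+     vectors show.  */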
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000083f95466;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010100005400;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000083f95466;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010100005400;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvextl_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
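+  /* The cases below exercise __lasx_xvsrlr_{b,h,w,d}: a logical right shift
+     with rounding, where each element is shifted by the count held in the
+     corresponding element of the second operand, taken modulo the element
+     width.  */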
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x40d74f979f99419f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x40d74f979f99419f;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff8080000004000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000080000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff8080000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffefffffefc;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000200000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000000;
+  __m256i_out = __lasx_xvsrlr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff0000000000080;
+  __m256i_out = __lasx_xvsrlr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7ff0000000000000;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvsrlr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000001020202;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001020202;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000002222;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003ddd80007bbb;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000002222;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003ddd80007bbb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff800000000000;
+  __m256i_out = __lasx_xvsrlr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x009f00f8007e00f0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f007f0081007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x009f00f8007e00f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f007f0081007f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0ea85f60984a8555;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00a21ef3246995f3;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1189ce8000fa14ed;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0e459089665f40f3;
+  *((unsigned long*)& __m256i_result[3]) = 0x000100f800000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0020001000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000f800000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0004000000000010;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000008000;
+  __m256i_out = __lasx_xvsrlr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7f7f000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7f7f000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100010001;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrlr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7c00000880008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7c00000880008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0100000001000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0100000001000100;
+  __m256i_out = __lasx_xvsrlr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000064;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000008;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffff80;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffff80;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000430207f944;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000038000000268;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000038000000268;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffff010ff0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffff010ff0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000201;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000201;
+  __m256i_out = __lasx_xvsrlr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000010006d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000010006d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff01fb0408;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf2b180c9fc1fefdc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff01fb0408;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf2b180c9fc1fefdc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0xf2b180c9fc1fefdc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0xf2b180c9fc1fefdc;
+  __m256i_out = __lasx_xvsrlr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[3]) = 0xff1cff1cff1c3fc7;
+  *((unsigned long*)& __m256i_result[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[1]) = 0xff1cff1cff1c3fc7;
+  *((unsigned long*)& __m256i_result[0]) = 0xff1cff1cff1cff1c;
+  __m256i_out = __lasx_xvsrlr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6b6b6b6b6b6b6b6b;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6b6b6b6b6b6b6b6b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x6b6b6b6b6b6b6b6b;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6b6b6b6b6b6b6b6b;
+  *((unsigned long*)& __m256i_op1[3]) = 0x6b6b6b6b6b6b6b6b;
+  *((unsigned long*)& __m256i_op1[2]) = 0x6b6b6b6b6b6b6b6b;
+  *((unsigned long*)& __m256i_op1[1]) = 0x6b6b6b6b6b6b6b6b;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6b6b6b6b6b6b6b6b;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000d6d6d;
+  __m256i_out = __lasx_xvsrlr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff01ff01ff01f010;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff01ff01ff01f010;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff01ff01ff01f010;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff01ff01ff01f010;
+  *((unsigned long*)& __m256i_result[3]) = 0x000078780000f0f1;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000078780000f0f1;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffc00040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffc00040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1080108010060002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1080108010060002;
+  __m256i_out = __lasx_xvsrlr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
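+  /* From here on the cases switch to __lasx_xvsrar_{b,h,w,d} (vector shift
+     right arithmetic, rounded).  Presumably the same rounding rule as the
+     xvsrlr cases above applies, except that src is treated as signed so the
+     shift sign-extends:
+       n = shift % elem_bits;
+       res = n ? (src >> n) + ((src >> (n - 1)) & 1) : src;   (src signed)  */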
+  *((unsigned long*)& __m256i_op0[3]) = 0x38a966b31be83ee9;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5f6108dc25b80001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf41a56e8a20878d7;
+  *((unsigned long*)& __m256i_op0[0]) = 0x683b8b67e20c0001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000501e99b;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000109973de7;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001020f22;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000001890b7a39;
+  *((unsigned long*)& __m256i_result[3]) = 0x38a966b301f41ffd;
+  *((unsigned long*)& __m256i_result[2]) = 0x5f6108ee13ff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0xf41a56e8d10201f6;
+  *((unsigned long*)& __m256i_result[0]) = 0x683b8b34f1020001;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000707;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000010200000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000070300000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01480000052801a2;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffdcff64;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020000020200000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020000020200000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0008000001010000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101000001010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020000020200000;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020000020200000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0008000001010000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101000001010000;
+  __m256i_out = __lasx_xvsrar_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0020000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0020000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff01ff3400000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ff83ff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbabababababababa;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbabababababababa;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffcc8000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff82037dfd0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvsrar_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x45baa7ef6a95a985;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x45baa7ef6a95a985;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_result[2]) = 0x45baa7ef6a95a985;
+  *((unsigned long*)& __m256i_result[1]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_result[0]) = 0x45baa7ef6a95a985;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x45baa7ef6a95a985;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x45baa7ef6a95a985;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000800;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000d0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000d0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000d0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000d0000;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000001dc;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000001dc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000001a00;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff02ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffffffff0100;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00fefffeff02ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffff0100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff00000100;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00feff00000000;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x2b2a292827262524;
+  *((unsigned long*)& __m256i_op1[2]) = 0x232221201f1e1d1c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x2b2a292827262524;
+  *((unsigned long*)& __m256i_op1[0]) = 0x232221201f1e1d1c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7171717171717171;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8e8e8e8e8f0e8e8e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7171717171717171;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8e8e8e8e8f0e8e8e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000007ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000007ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000007ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000007ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7171717171010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x8e8e8e8e8f00ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7171717171010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x8e8e8e8e8f00ffff;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000465;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000465;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000465;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000465;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010001;
+  __m256i_out = __lasx_xvsrar_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffe05f8102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffe05f8102;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffe05f8102;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffe05f8102;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000420080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000420080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000420080000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000420080000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000420080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000001607f0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000420080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000001607f0000;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_result[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_result[0]) = 0x43ef878780000009;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffa3;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000165a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffa3;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000165a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00005053000000ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00005053000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffa3;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffa3;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff800000000000;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000e0000000e00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000e0000000e00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000e000e000e000e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000e000e000e000e;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000e0000000e00;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000e0000000e00;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000800200027;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000800200027;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_result[3]) = 0x006018000000001a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0060401900000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x006018000000001a;
+  *((unsigned long*)& __m256i_result[0]) = 0x0060401900000000;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrar_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfefefefe3f800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfefefefe3f800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000040404040;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ffffff1dff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff1dffffff1dff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ffffff1dff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff1dffffff1dff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff1dffffff1dff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff1dffffff1dff;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrar_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
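+  /* The remaining cases use the immediate forms __lasx_xvsrlri_{b,h,w,d}
+     (vector shift right logical, rounded, by an immediate).  The shift
+     count presumably comes from the immediate rather than a second vector
+     operand, which is why only op0 and the expected result are set up
+     below.  */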
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_d(__m256i_op0,0x33);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvsrlri_w(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001000000;
+  __m256i_out = __lasx_xvsrlri_d(__m256i_op0,0x28);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000505;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff0002fffefffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0002ff7e8286;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff0002fffefffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0002ffff0001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0202000002020202;
+  *((unsigned long*)& __m256i_result[2]) = 0x0202000002010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0202000002020202;
+  *((unsigned long*)& __m256i_result[0]) = 0x0202000002020000;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001000000010;
+  __m256i_out = __lasx_xvsrlri_w(__m256i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_result[3]) = 0x0703030307030203;
+  *((unsigned long*)& __m256i_result[2]) = 0x0703030307030203;
+  *((unsigned long*)& __m256i_result[1]) = 0x0703030307030203;
+  *((unsigned long*)& __m256i_result[0]) = 0x0703030307030203;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003f3fc6c68787;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003f3f87870000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003f3fc6c68787;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003f3f87870000;
+  __m256i_out = __lasx_xvsrlri_d(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010183f95466;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01010101d58efe94;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000101000083f95;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000101000001010;
+  *((unsigned long*)& __m256i_result[1]) = 0x00001010000d58f0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000101000001010;
+  __m256i_out = __lasx_xvsrlri_w(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvsrlri_w(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_d(__m256i_op0,0x23);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010002000100020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010002000100020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010002000100020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010002000100020;
+  __m256i_out = __lasx_xvsrlri_h(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_w(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_d(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_d(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0020000000200000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0020000000200000;
+  __m256i_out = __lasx_xvsrlri_h(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_w(__m256i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_h(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000040000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000040000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000020000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000020000;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000e000e000e000e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000e000e000e000e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_d(__m256i_op0,0x39);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000040000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000040000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000040000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000040000000000;
+  __m256i_out = __lasx_xvsrlri_d(__m256i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x04e8296f18181818;
+  *((unsigned long*)& __m256i_op0[2]) = 0x132feea900000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x04e8296f18181818;
+  *((unsigned long*)& __m256i_op0[0]) = 0x132feea900000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x04e8296f18181818;
+  *((unsigned long*)& __m256i_result[2]) = 0x132feea900000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x04e8296f18181818;
+  *((unsigned long*)& __m256i_result[0]) = 0x132feea900000000;
+  __m256i_out = __lasx_xvsrlri_h(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000038000000268;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000038000000268;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000001200000011a;
+  *((unsigned long*)& __m256i_result[2]) = 0x2040204020402040;
+  *((unsigned long*)& __m256i_result[1]) = 0x000001200000011a;
+  *((unsigned long*)& __m256i_result[0]) = 0x2040204020402040;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff81001dff9dff9e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff81001dff9d003b;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff81001dff9dff9e;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff81001dff9d003b;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001000000010;
+  __m256i_out = __lasx_xvsrlri_w(__m256i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_w(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fffa003e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fffb009c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fffa003e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffb009c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvsrlri_d(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffe00000001;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0020004000400040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0020004000400040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0020004000400040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0020004000400040;
+  __m256i_out = __lasx_xvsrlri_h(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000800000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000800000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000800000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000800000;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffbfffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffbfffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0102020202010202;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0102020202010202;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010000;
+  __m256i_out = __lasx_xvsrlri_b(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0008000000000000;
+  __m256i_out = __lasx_xvsrlri_h(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
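+  /* The xvsrari.{b,h,w,d} cases below use the arithmetic
+     (sign-propagating) variant of the rounding right shift: roughly
+     (x + (1 << (imm - 1))) >> imm on signed elements for imm != 0,
+     with no rounding applied when imm == 0.  */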
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x81f7f2599f0509c2;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x51136d3c78388916;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffc0fcffffcf83;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000288a00003c1c;
+  __m256i_out = __lasx_xvsrari_w(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8b1414140e0e0e0e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00d6c1c830160048;
+  *((unsigned long*)& __m256i_op0[1]) = 0x36722a7e66972cd6;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe3aebaf4df958004;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffe000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100020001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000fffffffffffe;
+  __m256i_out = __lasx_xvsrari_h(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00001f41ffffbf00;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000040000fff8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_h(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_d(__m256i_op0,0x2a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffff6;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffff6;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffff6;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffff6;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_h(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_b(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00007dfd;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00007dfd;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_w(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_b(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_h(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_d(__m256i_op0,0x22);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000907;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000907;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_w(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fffffffa;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffffffa;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_d(__m256i_op0,0x2a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_w(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_w(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_d(__m256i_op0,0x35);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_h(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_w(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x20fc000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x20fc000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_b(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_w(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x007f0000007f0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x007f0000007f0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000003f8000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000003f8000004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_d(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_h(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x10fbe1e2e0000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x10fbe1e2e0000002;
+  __m256i_out = __lasx_xvsrari_w(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_b(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_w(__m256i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_b(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007f7f7f80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f7f7f80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000040004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000040004;
+  __m256i_out = __lasx_xvsrari_h(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_h(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_h(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_d(__m256i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_b(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_w(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_h(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffff8000;
+  __m256i_out = __lasx_xvsrari_d(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_d(__m256i_op0,0x26);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_w(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_b(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000400000004000;
+  __m256i_out = __lasx_xvsrari_w(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrari_b(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff81007fff0100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff81007fff0100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000008000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0003fffc0803fff8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000008000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0003fffc0803fff8;
+  __m256i_out = __lasx_xvsrari_d(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
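+  /* The xvsrln.{b.h,h.w,w.d} cases below shift each source element of
+     __m256i_op0 right logically by the count held in the corresponding
+     element of __m256i_op1 (only the low bits of the count appear to be
+     used, i.e. the count modulo the source element width), truncate the
+     result to half width, pack the narrowed values into the low half of
+     each 128-bit lane, and zero the high half.  */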
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000003868686a20;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0045b8ae81bce1d8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000003868686a20;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0045b8ae81bce1d8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00386a20b8aee1d8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00386a20b8aee1d8;
+  __m256i_out = __lasx_xvsrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x2020000020200000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2020000020200000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0008000001010000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101000001010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x017e00ff017e00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x88888a6d0962002e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xdb8a3109fe0f0020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000007fff01fffb;
+  *((unsigned long*)& __m256i_op0[0]) = 0xdb8e20990cce025a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff01ff3400000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ff83ff01;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0962002efe0f0020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff01fffb8667012d;
+  __m256i_out = __lasx_xvsrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff8fff8fff8fff8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff8fff8fff8fff8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fffeffeb;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fb7afb62;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fffeffeb;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fb7afb62;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffeffebfb7afb62;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffeffebfb7afb62;
+  __m256i_out = __lasx_xvsrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000010000000a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000010000000a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvsrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000a00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000010000000a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000a00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000010000000a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff017e6b803fc0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff017e6b803fc0;
+  __m256i_out = __lasx_xvsrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000781;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000078100000064;
+  __m256i_out = __lasx_xvsrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xa1a1a1a1a1a1a1a1;
+  *((unsigned long*)& __m256i_op0[2]) = 0xa1a1a1a15e5e5e5e;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa1a1a1a1a1a1a1a1;
+  *((unsigned long*)& __m256i_op0[0]) = 0xa1a1a1a15e5e5e5e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xa1a1a1a1a1a15e5e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xa1a1a1a1a1a15e5e;
+  __m256i_out = __lasx_xvsrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0003800400038004;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000a800b000a800b;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0003800400038004;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000a800b000a800b;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvsrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0080000000800000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0080000000800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0080000000800000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0080000000800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010000;
+  __m256i_out = __lasx_xvsrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000027;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000027;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1716151417161514;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1716151417161514;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1716151417161514;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1716151417161514;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0fff0fff0fff0fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0fff0fff0fff0fff;
+  __m256i_out = __lasx_xvsrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvsran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7f80780000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7f80780000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000013ffffffec;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000013ffffebd8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000013ffffffec;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000013ffffebd8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfebdff3eff3dff52;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfebdff3eff3dff52;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfebdff3eff3dff52;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfebdff3eff3dff52;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ffe7ffe7ffe7ffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007ffe7ffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ffe7ffe7ffe8000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000807e7ffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8091811081118110;
+  *((unsigned long*)& __m256i_op1[2]) = 0x80a6802680208015;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8091811081110013;
+  *((unsigned long*)& __m256i_op1[0]) = 0x80a6802680200018;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffefffe0000feff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffeff0000007e7f;
+  __m256i_out = __lasx_xvsran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000800000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0010000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0010000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff47b4ffff5879;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff47b4ffff5879;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff000000000000;
+  __m256i_out = __lasx_xvsran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000007c8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000007c8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001fe01fe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000ff0100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001fe01fe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000ff0100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000c8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000c8;
+  __m256i_out = __lasx_xvsran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000000e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000000e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000440800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000440800;
+  __m256i_out = __lasx_xvsran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3d3d3d3d3d3d3d3d;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffc01fc01;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffc01fc01;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001010100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000405;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001010100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000405;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfc01fc0101fe01dd;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfc01fc0101fe01dd;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000055;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000054;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000ffff0000;
+  __m256i_out = __lasx_xvsran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_w_d(__m256i_op0,__m256i_op1,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_b_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7f00000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_b_h(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_w_d(__m256i_op0,__m256i_op1,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffc500000002d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000034;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbfa3e127c147721f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1729c173836edfbe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xdf91f111808007fb;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5ff1f90ffffbf30f;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ff280016;
+  *((unsigned long*)& __m256i_result[2]) = 0xd193a30f94b9b7df;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000001001a;
+  *((unsigned long*)& __m256i_result[0]) = 0xc88840fdf887fd87;
+  __m256i_out = __lasx_xvsrlni_b_h(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000000f;
+  __m256i_out = __lasx_xvsrlni_w_d(__m256i_op0,__m256i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffc5556aaa8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffc5556aaa8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000007070205;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000002020100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000007070205;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000002020100;
+  __m256i_out = __lasx_xvsrlni_b_h(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffefe00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000003ff000003ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000003ff000003ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_w_d(__m256i_op0,__m256i_op1,0x36);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x73);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffe01fe01f;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffe01fe01f;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffe01fe01f;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffe01fe01f;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000fe01020b0001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000fe01020b0001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0fff0fff00000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0fff0fff00000020;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xd207e90001fb16ef;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc8eab25698f97e90;
+  *((unsigned long*)& __m256i_op0[1]) = 0xd207e90001fb16ef;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc8eab25698f97e90;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x01fb16ef98f97e90;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x01fb16ef98f97e90;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_w_d(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffa0078fffa0074;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffa0078fffa0074;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffa2078fffa2074;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffa2078fffa2074;
+  *((unsigned long*)& __m256i_result[3]) = 0x01ff01ff01ff01ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x01ff01ff01ff01ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x01ff01ff01ff01ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x01ff01ff01ff01ff;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000401000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003e6c0000cb7a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000401000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003e6c0000cb7a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x40000000b000032d;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x40000000b000032d;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_w_d(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256i_result[2]) = 0x01fc03fc01fc03fc;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fc03fc01fc03fc;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x3e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ef0120;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ef0120;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000e9ece9ec;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000e9ece9ec;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000e9ece9ec;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000e9ece9ec;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00ff0120;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000e9ec0000e9ec;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00ff0120;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000e9ec0000e9ec;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffdd001dffe00020;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffdd001dffe00031;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffdd001dffe00020;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffdd001dffe00031;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x3ff73ff83ff73ff8;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x3ff73ff83ff73ff8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0003000300030003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0003000300030003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0003000300030003;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0003000300030003;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[3]) = 0x0600060000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0600060000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001fffe0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001fffe00010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001fffe0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001fffe00010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0007fff8000ffff0;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000007fff8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0007fff8000ffff0;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000007fff8;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0020000f0000000f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010000f0000000f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0020000f0000000f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010000f0000000f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1e0000001e002000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1e0000001e002000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x27);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff3225;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffff3225;
+  *((unsigned long*)& __m256i_op1[3]) = 0x2221201f1e1d1c1b;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1a19181716151413;
+  *((unsigned long*)& __m256i_op1[1]) = 0x2221201f1e1d1c1b;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1a19181716151413;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000004442403;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000004442403;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x63);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fef0000ffff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fef0000ffff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0xde00fe0000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000fe010000fe01;
+  *((unsigned long*)& __m256i_result[1]) = 0xde00fe0000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000fe010000fe01;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000007070707;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff07070707;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000007070707;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff07070707;
+  __m256i_out = __lasx_xvsrlni_b_h(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x03ff000003ff03ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x03ff000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x03ff000003ff03ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x03ff000000000000;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_b_h(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000700000007;
+  *((unsigned long*)& __m256i_result[2]) = 0x0007ffff0007ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000700000007;
+  *((unsigned long*)& __m256i_result[0]) = 0x0007ffff0007ffff;
+  __m256i_out = __lasx_xvsrlni_w_d(__m256i_op0,__m256i_op1,0x2d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x66);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000e0000000e00;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000e0000000e00;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfc003802fc000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfc003802fc000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x03802fc000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x03802fc000000000;
+  __m256i_out = __lasx_xvsrlni_w_d(__m256i_op0,__m256i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x5a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x080808000828082f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0808080008280820;
+  *((unsigned long*)& __m256i_op0[1]) = 0x080808000828082f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0808080008280820;
+  *((unsigned long*)& __m256i_op1[3]) = 0x04e8296f18181818;
+  *((unsigned long*)& __m256i_op1[2]) = 0x132feea900000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x04e8296f18181818;
+  *((unsigned long*)& __m256i_op1[0]) = 0x132feea900000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00828082f0808080;
+  *((unsigned long*)& __m256i_result[2]) = 0xf18181818132feea;
+  *((unsigned long*)& __m256i_result[1]) = 0x00828082f0808080;
+  *((unsigned long*)& __m256i_result[0]) = 0xf18181818132feea;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x24);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_w_d(__m256i_op0,__m256i_op1,0x39);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x43);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfe01fe01fc01fc01;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfe01fe01fc01fc01;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfc01000000003fc0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfc01000000003fc0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000001fff0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000feff0001ffb8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000001fff0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000feff0001ffb8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_h_w(__m256i_op0,__m256i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000126000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2555205ea7bc4020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000126000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2555205ea7bc4020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_op1[2]) = 0x10ffffff10000006;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_op1[0]) = 0x10ffffff10000006;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000004980008;
+  *((unsigned long*)& __m256i_result[2]) = 0x003ffffffc400000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000004980008;
+  *((unsigned long*)& __m256i_result[0]) = 0x003ffffffc400000;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x46);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00f0000000f00010;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0ff00fff0ff10;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00f0000000f00010;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0ff00fff0ff10;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0087ff87f807ff87;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0087ff87f807ff87;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x68);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_b_h(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x50);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000050005;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf007fe76f008fe19;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf08aff01f07cc291;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf007fe76f008fe19;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf08aff01f07cc291;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000001400;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000003c01ff9;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000003c01ff9;
+  __m256i_out = __lasx_xvsrlni_d_q(__m256i_op0,__m256i_op1,0x66);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
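+  /* The blocks below switch from the logical (__lasx_xvsrlni_*) to the
+     arithmetic shift-right-narrow-immediate intrinsics
+     (__lasx_xvsrani_{b_h,h_w,w_d,d_q}): each block loads both vector
+     operands and the expected result, applies the intrinsic with an
+     immediate shift amount, and checks the output with ASSERTEQ_64.  */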
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000003ffffffff;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe1e800002f03988d;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe1e800002f03988d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff0f400001781cc4;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff0f400001781cc4;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc5c4c5c5c5c5c5c5;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc5c545c545c545c5;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc5c4c5c5c5c5c5c5;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc5c545c545c545c5;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000ff000000f8;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbc8ff0ffffffcff8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000ff000000f8;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbc8ff0ffffffcff8;
+  *((unsigned long*)& __m256i_result[3]) = 0xfcfcfcfcfc040404;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fbfffffc;
+  *((unsigned long*)& __m256i_result[1]) = 0xfcfcfcfcfc040404;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fbfffffc;
+  __m256i_out = __lasx_xvsrani_b_h(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x14131211100f0e0d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0c0b0a0908070605;
+  *((unsigned long*)& __m256i_op0[1]) = 0x14131211100f0e0d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0c0b0a0908070605;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0a09080706050403;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0a09080706050403;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrani_b_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fffffefd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffffefd;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvsrani_b_h(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000080;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x40);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000002a542a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000002a542a;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000242;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000242;
+  __m256i_out = __lasx_xvsrani_b_h(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0707feb608c9328b;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc237bd65fc892985;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0707feb608c9328b;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc237bd65fc892985;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00150015003a402f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x333568ce26dcd055;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00150015003a402f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x333568ce26dcd055;
+  *((unsigned long*)& __m256i_result[3]) = 0x0e0f1192846ff912;
+  *((unsigned long*)& __m256i_result[2]) = 0x002a0074666a4db9;
+  *((unsigned long*)& __m256i_result[1]) = 0x0e0f1192846ff912;
+  *((unsigned long*)& __m256i_result[0]) = 0x002a0074666a4db9;
+  __m256i_out = __lasx_xvsrani_h_w(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffdfffffffdff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffdfffffffdff;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x37);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8080808000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8080808000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3f7f7f7eff800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3f7f7f7eff800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007efeff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007efeff00;
+  __m256i_out = __lasx_xvsrani_b_h(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff3eff3eff3eff3e;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff3eff3eff3eff3e;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffff3e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffff3e;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x70);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0002000200020018;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0002000200020008;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00c0000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0040000000000000;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrani_h_w(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrani_b_h(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1f001f00000007ef;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00001fff200007ef;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000f0f0003;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000f1003;
+  __m256i_out = __lasx_xvsrani_b_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000fc38fc38;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000fc38fc38;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000fefefe000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000fefefe000000;
+  __m256i_out = __lasx_xvsrani_b_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01010101010101c9;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x01010101010101c9;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1010101010101010;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x2c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01010101010101c9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01010101010101c9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000781;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[3]) = 0x0008080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0008080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000003c;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x45);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00f3009500db00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00f3009500db00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000003cc0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000003cc0;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x6a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000400100013;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000400100014;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000400100013;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0a0a000000000a0a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0a0a0a0a00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0a0a000000000a0a;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0a0a0a0a00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000020200000202;
+  *((unsigned long*)& __m256i_result[2]) = 0x4100004141410000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000020200000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x4100004141410000;
+  __m256i_out = __lasx_xvsrani_b_h(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000003;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000956a00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000956a00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x007fffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xb500000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x007fffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xb500000000000000;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x29);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000001010100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000405;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000001010100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000405;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffe00000ffe00000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffe00000ffe00000;
+  __m256i_out = __lasx_xvsrani_h_w(__m256i_op0,__m256i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrani_w_d(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrani_w_d(__m256i_op0,__m256i_op1,0x34);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrani_b_h(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x66);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffc0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffc0;
+  __m256i_out = __lasx_xvsrani_b_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffff80;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffff80;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrani_b_h(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrani_h_w(__m256i_op0,__m256i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffce;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffce;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x6b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000040e7;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000040e7;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000200000000000;
+  __m256i_out = __lasx_xvsrani_d_q(__m256i_op0,__m256i_op1,0x21);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3ff9fffa3ff9fffa;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3ff9fffa3ff9fffa;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007ff3;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000007ff3;
+  __m256i_out = __lasx_xvsrani_w_d(__m256i_op0,__m256i_op1,0x2f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
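+  /* Remaining blocks cover the logical shift-right-round-narrow
+     intrinsics (__lasx_xvsrlrn_{b_h,h_w,w_d}); unlike the *_ni_*
+     variants above they take no immediate operand, only the two
+     vector operands.  */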
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffff328dfff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6651bfff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0202020201010000;
+  __m256i_out = __lasx_xvsrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000050005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000505;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvsrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff820002ff820002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff820002ff820002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00020002ff820002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00020002ff820002;
+  __m256i_out = __lasx_xvsrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00020421d7d41124;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00020421d7d41124;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvsrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff000000ff;
+  __m256i_out = __lasx_xvsrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff000200000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff000200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ff020000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ff020000;
+  __m256i_out = __lasx_xvsrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001fe01fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ff0100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001fe01fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000ff0100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000007c8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000007c8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x01fe01fe0000ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fe01fe0000ff01;
+  __m256i_out = __lasx_xvsrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xf9f9f9f900000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xf9f9f9f900000002;
+  __m256i_out = __lasx_xvsrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000004843ffdff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000004843ffdff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00043fff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00043fff00000000;
+  __m256i_out = __lasx_xvsrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff1cff1b00e300e4;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff1cff1b00e300e4;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff1cff1b00e300e4;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff1cff1b00e30100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0020000000200000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x002000000020ffff;
+  __m256i_out = __lasx_xvsrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffdbff980038ffaf;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffafffe80004fff1;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffdbff980038ffaf;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffafffe80004fff1;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000020202020202;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101000000010000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000020202020202;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101000000010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000e3fec0004fff1;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000e3fec0004fff1;
+  __m256i_out = __lasx_xvsrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6100000800060005;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5ee1c073b800c916;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5ff00007fff9fff3;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0209fefb08140000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0003fffc00060000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00080000000cc916;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000006fff3;
+  __m256i_out = __lasx_xvsrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6100000800060005;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5ee1c073b800c916;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5ff00007fff9fff3;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ffff00ff000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00080005c073c916;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000100000007fff3;
+  __m256i_out = __lasx_xvsrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvsrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00050008000e0010;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0007000800100010;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00050008000e0010;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0007000800100010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000002affaa;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff002affaa;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000002affaa;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffd50055;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x002affaa00000000;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001f0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00007f7f00007f00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00007f7f00007fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0007fff8000ffff0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000007fff8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0007fff8000ffff0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000007fff8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f00ff00000000;
+  __m256i_out = __lasx_xvsrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000abff0000abff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000abff0000abff;
+  __m256i_out = __lasx_xvsrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0020000000200000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0020000000200000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001000000010;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffff800000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffff800000;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7000700070007000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7000700070007000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000070007000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7000700070007000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4040403fd03fd040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4040403fd03fd040;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffd03fd040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4040403fd03fd040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001010000010100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010000010100;
+  __m256i_out = __lasx_xvsrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc800c800c800c800;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8800c800c800c801;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc800c800c800c800;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8800c800c800c801;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000c8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000c8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000086000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00040ff288000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000086000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00040ff288000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x5555555555555555;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5555555555555555;
+  *((unsigned long*)& __m256i_op1[1]) = 0x5555555555555555;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5555555555555555;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000fc300000fc40;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000fc300000fc40;
+  __m256i_out = __lasx_xvsrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc008fa01c0090000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3f804000c008f404;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc008fa01c0090000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3f804000c008f404;
+  *((unsigned long*)& __m256i_op1[3]) = 0x82ff902d83000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f80000082fe0bd9;
+  *((unsigned long*)& __m256i_op1[1]) = 0x82ff902d83000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f80000082fe0bd9;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xc0090000c0200060;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xc0090000c0200060;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf3f3f3f3f3f3f4f3;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf3f3f3f3f3f3f4f3;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000f3f3f4f3;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000f3f3f4f3;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fff8579f;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfefefefe01010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfefefefe01010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfefefefe01010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfefefefe01010101;
+  __m256i_out = __lasx_xvsrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff010100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff010100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff010100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff010100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000810001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000810001;
+  __m256i_out = __lasx_xvsrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010110;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010110;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8282828282828282;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8768876887688769;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8282828282828282;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8768876887688769;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000104000200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000104000200;
+  __m256i_out = __lasx_xvsrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_d_q(__m256i_op0,__m256i_op1,0x7a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0100010001000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0100010001000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_b_h(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000808000008080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000808000008081;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000081;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_d_q(__m256i_op0,__m256i_op1,0x68);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000002a5429;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000002a5429;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_d_q(__m256i_op0,__m256i_op1,0x30);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000801380f380fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000801380f300fb;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000007f3a40;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_d_q(__m256i_op0,__m256i_op1,0x42);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_d_q(__m256i_op0,__m256i_op1,0x56);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf0000000f0000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xf0000000f0000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_b_h(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_h_w(__m256i_op0,__m256i_op1,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_b_h(__m256i_op0,__m256i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000004fc480040;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000004fc480040;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000004fc480040;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000004fc480040;
+  __m256i_out = __lasx_xvsrlrni_h_w(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0004000404040404;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0004000400000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000400000004;
+  __m256i_out = __lasx_xvsrlrni_b_h(__m256i_op0,__m256i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_h_w(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_b_h(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x80208020c22080a7;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x80208020c22080a7;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdf80ff20df80ff20;
+  *((unsigned long*)& __m256i_op1[2]) = 0xdfc2ff20df80ffa7;
+  *((unsigned long*)& __m256i_op1[1]) = 0xdf80ff20df80ff20;
+  *((unsigned long*)& __m256i_op1[0]) = 0xdfc2ff20df80ffa7;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000840100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xbffebffec0febfff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000840100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xbffebffec0febfff;
+  __m256i_out = __lasx_xvsrlrni_b_h(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffc0c0ffffbfc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffc0c0ffffbfc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003f3f0000400d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003f3f0000400d;
+  *((unsigned long*)& __m256i_result[3]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_d_q(__m256i_op0,__m256i_op1,0x44);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffe00000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffe00000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_d_q(__m256i_op0,__m256i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfc00000000000048;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfc00000000000048;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbfffa004fffd8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbfffa004fffd8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00003f0000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00002fffe8013fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00003f0000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00002fffe8013fff;
+  __m256i_out = __lasx_xvsrlrni_d_q(__m256i_op0,__m256i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_b_h(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000101000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000101000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_d_q(__m256i_op0,__m256i_op1,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00010001000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00010001000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000040004000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000004000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000040004000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000004000000000;
+  __m256i_out = __lasx_xvsrlrni_d_q(__m256i_op0,__m256i_op1,0x5a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_b_h(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00b2fe28e4420609;
+  *((unsigned long*)& __m256i_op0[2]) = 0x028da7fe15020000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00b2fe28e4420609;
+  *((unsigned long*)& __m256i_op0[0]) = 0x028da7fe15020000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000598;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000598;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_d_q(__m256i_op0,__m256i_op1,0x6d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000800000010;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000800000010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000002000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000002000000;
+  __m256i_out = __lasx_xvsrlrni_d_q(__m256i_op0,__m256i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000003ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001ffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000003ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001ffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_w_d(__m256i_op0,__m256i_op1,0x3c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0040000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0040000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0040000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0040000000000000;
+  __m256i_out = __lasx_xvsrlrni_w_d(__m256i_op0,__m256i_op1,0x2a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000001200000012;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000001200000012;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000001200000012;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001200000012;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrlrni_b_h(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0fff0fff0fc00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0fff0fff0fc00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000f880f87e;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000f880f87e;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000008000;
+  __m256i_out = __lasx_xvsrlrni_h_w(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000081220000812c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000812000008120;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000081220000812c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000812000008120;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_result[2]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_result[0]) = 0xfefefefefefefefe;
+  __m256i_out = __lasx_xvsrlrni_b_h(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000ffff0000;
+  __m256i_out = __lasx_xvsrlrni_b_h(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000007f007f5;
+  *((unsigned long*)& __m256i_op1[3]) = 0x002e4db200000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000315ac0000d658;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00735278007cf94c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0003ed8800031b38;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_w_d(__m256i_op0,__m256i_op1,0x3d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_w_d(__m256i_op0,__m256i_op1,0x3d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_h_w(__m256i_op0,__m256i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffefefffffcfa;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffff8fffffff8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff8fc000000;
+  __m256i_out = __lasx_xvsrarni_w_d(__m256i_op0,__m256i_op1,0x25);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7ff77fff7ff7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7ff77fff7ff7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000001000010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002000000022;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000002000000022;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000004;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0x3e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0x22);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_h_w(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000016600000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000016600000000;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffefe00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0x7f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_w_d(__m256i_op0,__m256i_op1,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000002a5429;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000002a5429;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000055;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000055;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_h_w(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000045;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000d0005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000045;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000d0005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0x50);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_w_d(__m256i_op0,__m256i_op1,0x2f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_h_w(__m256i_op0,__m256i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_b_h(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_w_d(__m256i_op0,__m256i_op1,0x20);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00550f0000550f00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000015c015c0;
+  *((unsigned long*)& __m256i_result[2]) = 0xc0c0c0cdc0c0c0cd;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xc0c0c0cdc0c0c0cd;
+  __m256i_out = __lasx_xvsrarni_b_h(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0003030300000300;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0003030300000300;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0003030300000100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0003030300000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x00f800f800f800f8;
+  *((unsigned long*)& __m256i_result[2]) = 0x0018181800181818;
+  *((unsigned long*)& __m256i_result[1]) = 0x00f800f800f800f8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0018181800181818;
+  __m256i_out = __lasx_xvsrarni_w_d(__m256i_op0,__m256i_op1,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x43d03bfff827ea21;
+  *((unsigned long*)& __m256i_op1[2]) = 0x43dac1f2a3804ff0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x43d03bfff827e9f9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x43e019c657c7d050;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xe8001411edf9c0f8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xe80014fdf0e3e428;
+  __m256i_out = __lasx_xvsrarni_b_h(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0fff0ff01ff14;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0fff0fff10003;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0fff0ff01ff14;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0fff0fff10003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfefee0e3fefefe00;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfefee0e3fefefe00;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_b_h(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000001fffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000001fffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000001fffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000001fffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_h_w(__m256i_op0,__m256i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff040000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff040000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x007f0000007f0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x007f0000007f0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0x27);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf7f8f7f8f800f800;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003f780000ff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf7f8f7f80000fff9;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003f780000ff80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1f001f00000007ef;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00001fff200007ef;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0x23);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_h_w(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7171717171717171;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8e8e8e8e8e8e8e8e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7171717171717171;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8e8e8e8e8e8e8e8e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x01c601c6fe3afe3a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x01c601c6fe3afe3a;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_h_w(__m256i_op0,__m256i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00003f3f00003f3f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00003f3f00003f3f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003f3f00004040;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvsrarni_h_w(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_h_w(__m256i_op0,__m256i_op1,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f010700c70106;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f010700c70106;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000010211921;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000010211921;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_b_h(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x82ff902d83000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f80000082fe0bd9;
+  *((unsigned long*)& __m256i_op1[1]) = 0x82ff902d83000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f80000082fe0bd9;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000001;
+  __m256i_out = __lasx_xvsrarni_w_d(__m256i_op0,__m256i_op1,0x3f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_w_d(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000080ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000080ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x08000000000000f8;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x08000000000000f8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_b_h(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1010101010101010;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0x3a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffff8;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020000000000000;
+  __m256i_out = __lasx_xvsrarni_d_q(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x4);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_h_w(__m256i_op0,__m256i_op1,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_b_h(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x03af03af03af03af;
+  *((unsigned long*)& __m256i_op0[2]) = 0x03acfc5303260e81;
+  *((unsigned long*)& __m256i_op0[1]) = 0x03af03af03af03af;
+  *((unsigned long*)& __m256i_op0[0]) = 0x03acfc5303260e81;
+  *((unsigned long*)& __m256i_op1[3]) = 0x03af03af03af03af;
+  *((unsigned long*)& __m256i_op1[2]) = 0x03acfc5303260e81;
+  *((unsigned long*)& __m256i_op1[1]) = 0x03af03af03af03af;
+  *((unsigned long*)& __m256i_op1[0]) = 0x03acfc5303260e81;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsrarni_h_w(__m256i_op0,__m256i_op1,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x44bb2cd3a35c2fd0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xca355ba46a95e31c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000100ab000500a0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000200b800080124;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001011b000200aa;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00150118008f0091;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f057f0b7f5b007f;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000020000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000020000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000007f00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7ffe7fffeffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffd84900000849;
+  *((unsigned long*)& __m256i_op0[0]) = 0x07fffc670800f086;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000000;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffff0ffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffff0ffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000017000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000017000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001700080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001700080;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2000200020002000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2000200020002000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2000200020002000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2000200020002000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f7f7f7f7f7f7f7f;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fff0e400;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000005536aaaaac;
+  *((unsigned long*)& __m256i_op0[2]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000005536aaaaac;
+  *((unsigned long*)& __m256i_op0[0]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000060102150101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000060102150101;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00000000000000;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000003f00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000003f00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000003f0000;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f7f7f7f0000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000089;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fff00000089;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfe7fffecfe7fffec;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfe7fffecfe7fffec;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffff600000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff000009ec;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffff600000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff000009ec;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8060000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8060000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1f001f00000007ef;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00001fff200007ef;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff000000010000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff00000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff00000001;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff81ff7dffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff81ff7dffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f7f7f7f7f017ffd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f7f7f7f7f017ffd;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000007;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000007;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000007ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000077fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00011ffb0000bee1;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00011ffb0000bee1;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fff7fff;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01ffff4300ffff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01ffff4300ffff00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff00000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x04e8296f08181818;
+  *((unsigned long*)& __m256i_op0[2]) = 0x032feea900000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x04e8296f08181818;
+  *((unsigned long*)& __m256i_op0[0]) = 0x032feea900000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffff0000;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffc01fc01;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffc01fc01;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x41cfe01dde000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x41cfe01dde000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000013fc03bbc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000013fc03bbc;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fff8ff40;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ff0100090040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fff8ff40;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ff0100090040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000017f00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f7f03030000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdf80df80df80dfff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffdf80dfff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000017f7f7f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000017f7f7f7f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000017fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000017fff;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff800000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff000000017fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff000000017fff;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000003fffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000003fffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff010100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff010100000001;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
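+  /* A brief orientation comment (added by the editor, not part of the
+     generated vectors): the cases above exercise the __lasx_xvssrln_*
+     intrinsics (saturating logical shift-right-and-narrow); the remaining
+     cases in this hunk switch to the __lasx_xvssran_* intrinsics
+     (saturating arithmetic shift-right-and-narrow), using the same
+     fixed-operand / expected-result pattern.  */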
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00007ffe81fdfe03;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7ffe800000000000;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffef000004ea;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1717171717171717;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000607f700000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1717171717171717;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000607f700000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffe81;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00f9f90079f9f9f9;
+  *((unsigned long*)& __m256i_op1[2]) = 0x79f9f9f900000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00f9f90079f9f9f9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x79f9f9f900000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000007f7f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000007f7f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000007f7f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f007f78;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000033007e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000021;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007f7f00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f7f00007fff;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000080;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff00000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000002aaad555;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000002aaad555;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007fff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fff00000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffc00000ffc0ffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffc00000ffc0ffc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffcfee0fe00ffe0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffcfee0fe00ffe0;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000001fff9fff8;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001fff9fff8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000001fff9fff8;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000001fff9fff8;
+  *((unsigned long*)& __m256i_op1[3]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op1[1]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffff900000003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffff900000003;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffff0000;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000100000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000100000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000000;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff2400000000ff00;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffeffe4fffeff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff6400000000ff00;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffeff66fffeff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffefffefffefffd;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000100da000100fd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001ffe20001fefd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001009a000100fd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001ff640001fefd;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fe0100000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fe0100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00c200c200c200c2;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00c200c200c200bb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00c200c200c200c2;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00c200c200c200bb;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_op1[2]) = 0xdbcbdbcb0000dbcb;
+  *((unsigned long*)& __m256i_op1[1]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_op1[0]) = 0xdbcbdbcb0000dbcb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000226200005111;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000016000000480d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000226200005111;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000016000000480d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1131288800000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1131288800000002;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010200000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff040000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff040000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007f3f7f007f1f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f3f7f007f1f;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff00000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000077fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000007ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8d8d72728d8d7272;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8d8d72728d8d8d8d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8d8d72728d8d7272;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8d8d72728d8d8d8d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001010800;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001010800;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fffff800;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffff800;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001010800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001010800;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff0008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffff0008;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfdfdfdfdfdfdfdfd;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe27fe2821d226278;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfdfdfdfdfdfdfdfd;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe27fe2821d226278;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000000e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000000e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010001;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000400080ffc080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080ff0080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000400080ffc080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080ff0080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff000000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff000000000080;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007f807f80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f807f80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000007f7f;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000001fff0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000feff0001ffb8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000001fff0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000feff0001ffb8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffff1cff1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff1cff18;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffff1cff1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff1cff18;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x001fffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x001fffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbf3ffffffffeffed;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbf3ffffffffeffed;
+  *((unsigned long*)& __m256i_op1[1]) = 0xbf3ffffffffeffed;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbf3ffffffffeffed;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe013fcf2e015fc38;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe013fd00dff78420;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe013fcf2e015fc38;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe013fd00dff78420;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8282828282828282;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8768876887688769;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8282828282828282;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8768876887688769;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000003fffc0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000003fffc0;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
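+  /* From this point on the logical-shift variants __lasx_xvssrln_{b_h,h_w,w_d}
+     and __lasx_xvssrln_{bu_h,hu_w,wu_d} are covered: the same narrowing and
+     saturating pattern, but with a logical (zero-filling) right shift.  */
+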
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x44bb2cd3a35c2fd0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xca355ba46a95e31c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000100ab000500a0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000200b800080124;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001011b000200aa;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00150118008f0091;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f057f0b7f5b007f;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000020000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000020000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000007f00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7ffe7fffeffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffd84900000849;
+  *((unsigned long*)& __m256i_op0[0]) = 0x07fffc670800f086;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000000;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffff0ffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffff0ffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000017000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000017000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001700080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001700080;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2000200020002000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2000200020002000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2000200020002000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2000200020002000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f7f7f7f7f7f7f7f;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fff0e400;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000005536aaaaac;
+  *((unsigned long*)& __m256i_op0[2]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000005536aaaaac;
+  *((unsigned long*)& __m256i_op0[0]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000060102150101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000060102150101;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00000000000000;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000003f00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000003f00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000003f0000;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f7f7f7f0000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000089;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fff00000089;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfe7fffecfe7fffec;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfe7fffecfe7fffec;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffff600000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff000009ec;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffff600000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff000009ec;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8060000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8060000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1f001f00000007ef;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00001fff200007ef;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff000000010000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff00000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff00000001;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff81ff7dffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff81ff7dffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f7f7f7f7f017ffd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f7f7f7f7f017ffd;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000007;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000007;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000007ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000077fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00011ffb0000bee1;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00011ffb0000bee1;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fff7fff;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01ffff4300ffff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01ffff4300ffff00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff00000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x04e8296f08181818;
+  *((unsigned long*)& __m256i_op0[2]) = 0x032feea900000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x04e8296f08181818;
+  *((unsigned long*)& __m256i_op0[0]) = 0x032feea900000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffff0000;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffc01fc01;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffc01fc01;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x41cfe01dde000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x41cfe01dde000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000013fc03bbc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000013fc03bbc;
+  __m256i_out = __lasx_xvssrln_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fff8ff40;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ff0100090040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fff8ff40;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ff0100090040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000017f00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f7f03030000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdf80df80df80dfff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffdf80dfff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000017f7f7f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000017f7f7f7f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000017fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000017fff;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff800000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff000000017fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff000000017fff;
+  __m256i_out = __lasx_xvssrln_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrln_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000003fffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000003fffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff010100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff010100000001;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrln_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
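+  /* From here on the cases switch to the arithmetic variants
+     (__lasx_xvssran_*), which shift right with sign extension before
+     saturating and narrowing; the expected vectors use the same per-lane
+     layout as the __lasx_xvssrln_* cases above.  */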
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00007ffe81fdfe03;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7ffe800000000000;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffef000004ea;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1717171717171717;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000607f700000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1717171717171717;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000607f700000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffe81;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00f9f90079f9f9f9;
+  *((unsigned long*)& __m256i_op1[2]) = 0x79f9f9f900000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00f9f90079f9f9f9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x79f9f9f900000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000007f7f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000007f7f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000007f7f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f007f78;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000033007e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000021;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007f7f00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f7f00007fff;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000080;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff00000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000002aaad555;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000002aaad555;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007fff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fff00000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffc00000ffc0ffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffc00000ffc0ffc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffcfee0fe00ffe0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffcfee0fe00ffe0;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000001fff9fff8;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001fff9fff8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000001fff9fff8;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000001fff9fff8;
+  *((unsigned long*)& __m256i_op1[3]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op1[1]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffff900000003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffff900000003;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffff0000;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000100000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000100000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000000;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff2400000000ff00;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffeffe4fffeff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff6400000000ff00;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffeff66fffeff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffefffefffefffd;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000100da000100fd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001ffe20001fefd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001009a000100fd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001ff640001fefd;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fe0100000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fe0100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00c200c200c200c2;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00c200c200c200bb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00c200c200c200c2;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00c200c200c200bb;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_op1[2]) = 0xdbcbdbcb0000dbcb;
+  *((unsigned long*)& __m256i_op1[1]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_op1[0]) = 0xdbcbdbcb0000dbcb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000226200005111;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000016000000480d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000226200005111;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000016000000480d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1131288800000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1131288800000002;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010200000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff040000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff040000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007f3f7f007f1f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f3f7f007f1f;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff00000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000077fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000007ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8d8d72728d8d7272;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8d8d72728d8d8d8d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8d8d72728d8d7272;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8d8d72728d8d8d8d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001010800;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001010800;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fffff800;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffff800;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001010800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001010800;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff0008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffff0008;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfdfdfdfdfdfdfdfd;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe27fe2821d226278;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfdfdfdfdfdfdfdfd;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe27fe2821d226278;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000000e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000000e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010001;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000400080ffc080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080ff0080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000400080ffc080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080ff0080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff000000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff000000000080;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007f807f80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f807f80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000007f7f;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000001fff0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000feff0001ffb8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000001fff0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000feff0001ffb8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffff1cff1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff1cff18;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffff1cff1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff1cff18;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssran_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x001fffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x001fffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbf3ffffffffeffed;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbf3ffffffffeffed;
+  *((unsigned long*)& __m256i_op1[1]) = 0xbf3ffffffffeffed;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbf3ffffffffeffed;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe013fcf2e015fc38;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe013fd00dff78420;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe013fcf2e015fc38;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe013fd00dff78420;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssran_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8282828282828282;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8768876887688769;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8282828282828282;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8768876887688769;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000003fffc0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000003fffc0;
+  __m256i_out = __lasx_xvssran_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
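+  /* From here on the immediate forms are exercised: __lasx_xvssrlni_*
+     shifts the elements of both vector operands right logically by the
+     immediate third argument, saturates them to the narrower element
+     type, and packs the narrowed first operand into the high half and
+     the narrowed second operand into the low half of each 128-bit
+     lane.  */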
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7f7f7f7f00007f7f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3f28306860663e60;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x40d74f979f99419f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fff01fd7fff7fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fff7fff7fff;
+  __m256i_out = __lasx_xvssrlni_h_w(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_bu_h(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffe0ffe0ffe0ffe0;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffe0ffe0ffe0ffe0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffe0ffe0ffe0ffe0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffe0ffe0ffe0ffe0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1e1800001e180000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1e1800001e180000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffe0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000001e18;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffe0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000001e18;
+  __m256i_out = __lasx_xvssrlni_du_q(__m256i_op0,__m256i_op1,0x70);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_bu_h(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_wu_d(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_result[3]) = 0x1fffffff1fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0383634303836343;
+  *((unsigned long*)& __m256i_result[1]) = 0x1fffffff1fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0383634303836343;
+  __m256i_out = __lasx_xvssrlni_w_d(__m256i_op0,__m256i_op1,0x23);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_du_q(__m256i_op0,__m256i_op1,0x68);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_d_q(__m256i_op0,__m256i_op1,0x6c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0036003200360032;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0036003200360032;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0036003200360032;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0036003200360032;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_wu_d(__m256i_op0,__m256i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffefffffefc;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrlni_wu_d(__m256i_op0,__m256i_op1,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000800000004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000bf6e0000c916;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000030000fff3;
+  *((unsigned long*)& __m256i_op1[3]) = 0x001175f10e4330e8;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff8f0842ff29211e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffff8d9ffa7103d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000e00ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000ff00ff;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7f80780000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7f80780000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00001000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00001000;
+  __m256i_out = __lasx_xvssrlni_wu_d(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvssrlni_d_q(__m256i_op0,__m256i_op1,0x39);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_op1[2]) = 0x6aeaeaeaeaeaeaea;
+  *((unsigned long*)& __m256i_op1[1]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6aeaeaeaeaeaeaea;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_b_h(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000003f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000003f0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_w_d(__m256i_op0,__m256i_op1,0x30);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fffc0000fee0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fe000000ffe0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffff900000003;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffff900000003;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7ffe00007f000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvssrlni_w_d(__m256i_op0,__m256i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ffe00007f000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff0000ffff;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe17cec8fe08008ac;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe0801f41e0800168;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9240f24a84b18025;
+  *((unsigned long*)& __m256i_op1[2]) = 0x9240f24a84b18025;
+  *((unsigned long*)& __m256i_op1[1]) = 0xb2c0b341807f8006;
+  *((unsigned long*)& __m256i_op1[0]) = 0xb2c0b341807f8006;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000012481e4950;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001658166830;
+  __m256i_out = __lasx_xvssrlni_du_q(__m256i_op0,__m256i_op1,0x5b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_b_h(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x77777777f7777777;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf777777777777777;
+  *((unsigned long*)& __m256i_op0[1]) = 0x77777777f7777777;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf777777777777777;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_bu_h(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ff24;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ff24;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000003;
+  __m256i_out = __lasx_xvssrlni_wu_d(__m256i_op0,__m256i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000040404240;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000040404240;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000040404240;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000040404240;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f7f00007f7f;
+  __m256i_out = __lasx_xvssrlni_b_h(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_h_w(__m256i_op0,__m256i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00010001000c4411;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100044411;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000002800000010;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000002800000010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002000200020018;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0002000200020008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_h_w(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000c0000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000040000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000c0000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000040000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0003030300000300;
+  *((unsigned long*)& __m256i_result[2]) = 0x0003030300000300;
+  *((unsigned long*)& __m256i_result[1]) = 0x0003030300000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0003030300000100;
+  __m256i_out = __lasx_xvssrlni_bu_h(__m256i_op0,__m256i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000002000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000800000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000002000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000800000;
+  __m256i_out = __lasx_xvssrlni_d_q(__m256i_op0,__m256i_op1,0x28);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003fff00003fff;
+  __m256i_out = __lasx_xvssrlni_w_d(__m256i_op0,__m256i_op1,0x32);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_b_h(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f00ff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0007fff8000ffff0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0007fff8000ffff0;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000030007;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007f7f817f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007f7f817f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4ffc3f783fc040c0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3fc03f803fc040c0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4ffc3f783fc040c0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3fc03f803fc040c0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0003fbfc0bfbfc03;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0003fbfc0bfbfc03;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrlni_du_q(__m256i_op0,__m256i_op1,0x2d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff56ff55ff01ff01;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff56ff55ff01ff01;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007f7f7f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007f7f7f7f;
+  __m256i_out = __lasx_xvssrlni_b_h(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_b_h(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xa90896a400000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa90896a400000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff47b4ffff5879;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff47b4ffff5879;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f7f000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f7f000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f7f7f7f7f7f7f;
+  __m256i_out = __lasx_xvssrlni_b_h(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff80017fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff80017fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvssrlni_w_d(__m256i_op0,__m256i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffe00000000;
+  __m256i_out = __lasx_xvssrlni_du_q(__m256i_op0,__m256i_op1,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80000000ff810011;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80000000ff810011;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff8180ffff8181;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff8180ffff8181;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000008000ff00;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ff81ff81;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000008000ff00;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ff81ff81;
+  __m256i_out = __lasx_xvssrlni_bu_h(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffebeeaaefafb;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffebeeaaeeeeb;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffebeeaaefafb;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffebeeaaeeeeb;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrlni_w_d(__m256i_op0,__m256i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x01ffbfff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x03ffffff03ffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x01ffbfff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x03ffffff03ffffff;
+  __m256i_out = __lasx_xvssrlni_w_d(__m256i_op0,__m256i_op1,0x26);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x001f001f001f001f;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_d_q(__m256i_op0,__m256i_op1,0x61);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0200000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0200000000000000;
+  __m256i_out = __lasx_xvssrlni_du_q(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1f001f00000007ef;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00001fff200007ef;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000003030000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000030400;
+  __m256i_out = __lasx_xvssrlni_b_h(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007000008e700000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007000008e700000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7171717171010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8e8e8e8e8f00ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7171717171010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8e8e8e8e8f00ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xe2e2e202ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_result[0]) = 0xe2e2e202ffffffff;
+  __m256i_out = __lasx_xvssrlni_bu_h(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc800c800c800c800;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8800c800c800c801;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc800c800c800c800;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8800c800c800c801;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_b_h(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
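+  /* __lasx_xvpickve2gr_du extracts double-word element 1 into a scalar;
+     this generated case stores the value but does not assert on it.  */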
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_bu_h(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0003800400038004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000a800b000a800b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0003800400038004;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000a800b000a800b;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000e0010000e;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000e0010000e;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_d_q(__m256i_op0,__m256i_op1,0x4e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_wu_d(__m256i_op0,__m256i_op1,0x38);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x01ff01ff01c0003e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x01ff01ff01c0003e;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_bu_h(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0707070707070707;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0707070707070707;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_bu_h(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0018001800180018;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0018001800180018;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0018001800180018;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0018001800180018;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3000300030003000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3000300030003000;
+  __m256i_out = __lasx_xvssrlni_h_w(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_w_d(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_wu_d(__m256i_op0,__m256i_op1,0x35);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000598;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000598;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000002cc0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000002cc0000;
+  __m256i_out = __lasx_xvssrlni_d_q(__m256i_op0,__m256i_op1,0x31);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff81001dff9dff9e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff81001dff9d003b;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff81001dff9dff9e;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff81001dff9d003b;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0002000200010002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0002000200010002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0002000200010002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0002000200010002;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f1d7f7f7f1d7f3b;
+  *((unsigned long*)& __m256i_result[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f1d7f7f7f1d7f3b;
+  *((unsigned long*)& __m256i_result[0]) = 0x0202010202020102;
+  __m256i_out = __lasx_xvssrlni_b_h(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000dfffffff1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000cfffffff3;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000dfffffff1;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000cfffffff3;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00003f3f00003f3f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00003f3f00003f3f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_b_h(__m256i_op0,__m256i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4000c08000000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000080c000c080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4000c08000000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000080c000c080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000004000;
+  __m256i_out = __lasx_xvssrlni_w_d(__m256i_op0,__m256i_op1,0x31);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000010006d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000010006d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000004000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000004000000080;
+  __m256i_out = __lasx_xvssrlni_w_d(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000118;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000118;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_w_d(__m256i_op0,__m256i_op1,0x2e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_w_d(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x007efffefffefffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff80fffffffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x007efffefffefffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff80fffffffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_bu_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000e3ab0001352b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000e3ab0001352b;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000038ea4d4a;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000038ea4d4a;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff00007fff0000;
+  __m256i_out = __lasx_xvssrlni_h_w(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_bu_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000a400ff004f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000a400ff004f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000a400ff004f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000a400ff004f;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvssrlni_d_q(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_b_h(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x001fffffffe00011;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x001fffffffe00011;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvssrlni_d_q(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlni_hu_w(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
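+  /* From here the cases switch to __lasx_xvssrani_*, the arithmetic
+     counterpart of xvssrlni: each source element is arithmetically
+     shifted right by the immediate, then saturated and narrowed into
+     the destination.  */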
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f057f0b7f5b007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000007f007f5;
+  __m256i_out = __lasx_xvssrani_hu_w(__m256i_op0,__m256i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001fc000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000c475ceb40000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fb0819280000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x074132a240000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000003a0200;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000c9;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_hu_w(__m256i_op0,__m256i_op1,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_hu_w(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00007fff7fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00007fff7fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_d_q(__m256i_op0,__m256i_op1,0x37);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvssrani_b_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffff8001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffff8001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffff0ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff0ffff0000;
+  __m256i_out = __lasx_xvssrani_b_h(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000080008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fff7fff;
+  __m256i_out = __lasx_xvssrani_h_w(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001ffffff;
+  __m256i_out = __lasx_xvssrani_b_h(__m256i_op0,__m256i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_b_h(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_du_q(__m256i_op0,__m256i_op1,0x73);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_h_w(__m256i_op0,__m256i_op1,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_b_h(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0100010001000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0100010001000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0004000400040004;
+  __m256i_out = __lasx_xvssrani_h_w(__m256i_op0,__m256i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_wu_d(__m256i_op0,__m256i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7f80780000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7f80780000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000f0000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000f0000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1fe01e0000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1fe01e0000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_w_d(__m256i_op0,__m256i_op1,0x22);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_du_q(__m256i_op0,__m256i_op1,0x6b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xce7ffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xce7ffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff39ffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff39ffffff;
+  __m256i_out = __lasx_xvssrani_d_q(__m256i_op0,__m256i_op1,0x5e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_w_d(__m256i_op0,__m256i_op1,0x3b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffc0000fffc0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffc0000fffc0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffc0000fffc0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffc0000fffc0000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_bu_h(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff90000fff9fff9;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff90000fff9fff9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_bu_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01fe8001b72e0001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xb72e8001b72eaf12;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01fe000247639d9c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xb5308001b72eaf12;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_d_q(__m256i_op0,__m256i_op1,0x26);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000089;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_w_d(__m256i_op0,__m256i_op1,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_w_d(__m256i_op0,__m256i_op1,0x3f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff7fff05407fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff7fff05407fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00001fff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00001fff;
+  __m256i_out = __lasx_xvssrani_hu_w(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x38f7414938f7882f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x38f7414938f78830;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000801380f380fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000801380f300fb;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvssrani_du_q(__m256i_op0,__m256i_op1,0x2c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0303030303020000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0303030303020000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_h_w(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_wu_d(__m256i_op0,__m256i_op1,0x31);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000007;
+  __m256i_out = __lasx_xvssrani_du_q(__m256i_op0,__m256i_op1,0x4d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_du_q(__m256i_op0,__m256i_op1,0x59);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_bu_h(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_h_w(__m256i_op0,__m256i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvssrani_wu_d(__m256i_op0,__m256i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xd04752cdd5543b56;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6906e68064f3d78b;
+  *((unsigned long*)& __m256i_op0[1]) = 0xd04752cdd5543b56;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6906e68064f3d78b;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000ff1100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000004560420;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000ff1100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000004560420;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ffff00ff00;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000fff00004542;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ffff00ff00;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000fff00004542;
+  __m256i_out = __lasx_xvssrani_bu_h(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdf00000052a00000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5b7f00ff5b7f00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xdf00000052a00000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5b7f00ff5b7f00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00c0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0040000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000c0000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000040000000;
+  __m256i_out = __lasx_xvssrani_hu_w(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fffffe02;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000300000005fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffff02;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000300000005fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0007fd00000f02ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001fffeff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ffffffff00;
+  __m256i_out = __lasx_xvssrani_b_h(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000002000000018;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000002000000019;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000200000001e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000002000000019;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0004000000030000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000400000003c000;
+  __m256i_out = __lasx_xvssrani_d_q(__m256i_op0,__m256i_op1,0x33);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x009c3e201e39e7e3;
+  *((unsigned long*)& __m256i_op0[2]) = 0x87c1135043408bba;
+  *((unsigned long*)& __m256i_op0[1]) = 0x009c3e201e39e7e3;
+  *((unsigned long*)& __m256i_op0[0]) = 0x87c1135043408bba;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_wu_d(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001b0b1b4b5dd9f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f7f7f5c8f374980;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001b0b1b4b5dd9f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f7f7f5c8f374980;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100007f7f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100007f7f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000000;
+  __m256i_out = __lasx_xvssrani_wu_d(__m256i_op0,__m256i_op1,0x30);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_wu_d(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_w_d(__m256i_op0,__m256i_op1,0x39);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_hu_w(__m256i_op0,__m256i_op1,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x007c7fff00007fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00817fff00810000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x007c7fff00007fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00817fff00810000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_du_q(__m256i_op0,__m256i_op1,0x7c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_bu_h(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_w_d(__m256i_op0,__m256i_op1,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000457d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000b03f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000457d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000b03f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_wu_d(__m256i_op0,__m256i_op1,0x3b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_h_w(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x2000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_d_q(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0f000f000f000f00;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0f000f000f000f00;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_bu_h(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_hu_w(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007fc0083fc7c007;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x007fc0083fc7c007;
+  __m256i_out = __lasx_xvssrani_d_q(__m256i_op0,__m256i_op1,0x42);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00067fff00047fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00027fff000080fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00067fff00047fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00027fff000080fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x067f047f027f0080;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x067f047f027f0080;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_h_w(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_h_w(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0af57272788754ab;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000005e80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0af57272788754ab;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000005e80;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000f0f0f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f7f7f7f0000007f;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000f0f0f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f7f7f7f0000007f;
+  __m256i_out = __lasx_xvssrani_b_h(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_w_d(__m256i_op0,__m256i_op1,0x32);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01ffff4300ffff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01ffff4300ffff00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000040004000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000004000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000040004000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000004000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100000000;
+  __m256i_out = __lasx_xvssrani_d_q(__m256i_op0,__m256i_op1,0x2e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_hu_w(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_du_q(__m256i_op0,__m256i_op1,0x4b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_h_w(__m256i_op0,__m256i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0004000f00100003;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000400030010000f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0004000f00100003;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000400030010000f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0400100004001000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0400100004001000;
+  __m256i_out = __lasx_xvssrani_hu_w(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3eab77367fff4848;
+  *((unsigned long*)& __m256i_op0[2]) = 0x408480007fff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3eab77367fff4848;
+  *((unsigned long*)& __m256i_op0[0]) = 0x408480007fff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000700000008;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000700000008;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrani_w_d(__m256i_op0,__m256i_op1,0x3b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_du_q(__m256i_op0,__m256i_op1,0x55);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc07f8000c07f8000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc07f8000c07f8000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000fff01fe0;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000fff01fe0;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrani_w_d(__m256i_op0,__m256i_op1,0x2a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fe96fe95;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6afc01000001ff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fe96fe95;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6afc01000001ff00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000010000ff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000010000ff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_d_q(__m256i_op0,__m256i_op1,0x7e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000040404000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000040404000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000404;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000404;
+  __m256i_out = __lasx_xvssrani_h_w(__m256i_op0,__m256i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_result[3]) = 0x4000400040004000;
+  *((unsigned long*)& __m256i_result[2]) = 0x4000400040004000;
+  *((unsigned long*)& __m256i_result[1]) = 0x4000400040004000;
+  *((unsigned long*)& __m256i_result[0]) = 0x4000400040004000;
+  __m256i_out = __lasx_xvssrani_bu_h(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000020202000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000020202000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_w_d(__m256i_op0,__m256i_op1,0x3d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvssrani_bu_h(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000001ff1;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000001ff1;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_d_q(__m256i_op0,__m256i_op1,0x53);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_du_q(__m256i_op0,__m256i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_w_d(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x03fbfffc03fc07fc;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x03fbfffc03fc07fc;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff80000000;
+  __m256i_out = __lasx_xvssrani_w_d(__m256i_op0,__m256i_op1,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff003fffc0;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000003fffc0;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrani_h_w(__m256i_op0,__m256i_op1,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffc00fffffc00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffc00fffffc00;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff00ff007f007f00;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00ff007f007f00;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[2]) = 0xc03fc03fc03fc03f;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[0]) = 0xc03fc03fc03fc03f;
+  __m256i_out = __lasx_xvssrani_d_q(__m256i_op0,__m256i_op1,0x3a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000ff;
+  __m256i_out = __lasx_xvssrani_b_h(__m256i_op0,__m256i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_wu_d(__m256i_op0,__m256i_op1,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_du_q(__m256i_op0,__m256i_op1,0x6c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrani_h_w(__m256i_op0,__m256i_op1,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000700000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000700000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000005;
+  __m256i_out = __lasx_xvssrani_du_q(__m256i_op0,__m256i_op1,0x60);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrani_du_q(__m256i_op0,__m256i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrani_bu_h(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff0000fffd0004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0000fffd0004;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff0000fffd0004;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000fffd0004;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000000f;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000000f;
+  __m256i_out = __lasx_xvssrani_d_q(__m256i_op0,__m256i_op1,0x6c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1515151515151515;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1515151515151515;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1515151515151515;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1515151515151515;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff7fff7fff;
+  __m256i_out = __lasx_xvssrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf800f800f800c000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf800f800f800a000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf800f800f800e000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf800f800f800e000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf800f800f800c000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf800f800f800a000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf800f800f800e000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf800f800f800e000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff7fff7fff;
+  __m256i_out = __lasx_xvssrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffefefffffcfa;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x5555555536aaaaac;
+  *((unsigned long*)& __m256i_op1[2]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_op1[1]) = 0x5555555536aaaaac;
+  *((unsigned long*)& __m256i_op1[0]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffc0000fffc0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffc0000fffc0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffc0000fffc0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffc0000fffc0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0002000200020002;
+  __m256i_out = __lasx_xvssrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff90000fff9fff9;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff90000fff9fff9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffff0004ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffff0004ff;
+  __m256i_out = __lasx_xvssrlrn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000005be55bd2;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbabababababababa;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbabababababababa;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffef;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffef;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffefffefffefffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0404ffff00000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0404040800000010;
+  __m256i_out = __lasx_xvssrlrn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x007f00f8ff7fff80;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fff6a9d8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x007f00f8ff7fff80;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fff6a9d8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvssrlrn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000001b0000001b;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001b00fd0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000001b0000001b;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001b00fd0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000019;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000019;
+  __m256i_out = __lasx_xvssrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000070700000707;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000009091b1b1212;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000070700000707;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000009091b1b1212;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000027d00f8;
+  *((unsigned long*)& __m256i_op1[2]) = 0x040204660265fe22;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000027d00f8;
+  *((unsigned long*)& __m256i_op1[0]) = 0x040204660265fe22;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe273e273e273e273;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe273e273e273e273;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe273e273e273e273;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe273e273e273e273;
+  *((unsigned long*)& __m256i_op1[3]) = 0xd207e90001fb16ef;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc8eab25698f97e90;
+  *((unsigned long*)& __m256i_op1[1]) = 0xd207e90001fb16ef;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc8eab25698f97e90;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001c4e8ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001c4e8ffffffff;
+  __m256i_out = __lasx_xvssrlrn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffff0000ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffff0000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007f0200007f02;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f0200007f02;
+  __m256i_out = __lasx_xvssrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0097011900f4009f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x003200d4010f0144;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0097011900f301cd;
+  *((unsigned long*)& __m256i_op0[0]) = 0x010b008800f80153;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ff810011;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ff810011;
+  *((unsigned long*)& __m256i_op1[3]) = 0x3fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3fff8000ffa08004;
+  *((unsigned long*)& __m256i_op1[1]) = 0x3fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3fff8000ffa08004;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ff01;
+  __m256i_out = __lasx_xvssrlrn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fc38fc38;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fc38fc38;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff00ff0000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffbfffa0ffffff80;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff00ff0000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffbfffa0ffffff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff02000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff02000000;
+  __m256i_out = __lasx_xvssrlrn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xa1a1a1a1a1a1a1a1;
+  *((unsigned long*)& __m256i_op0[2]) = 0xa1a1a1a15e5e5e5e;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa1a1a1a1a1a1a1a1;
+  *((unsigned long*)& __m256i_op0[0]) = 0xa1a1a1a15e5e5e5e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f7f7f7f7f7f7f7f;
+  __m256i_out = __lasx_xvssrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000457db03e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff457db03f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000457db03e;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff457db03f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00020001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00020001;
+  __m256i_out = __lasx_xvssrlrn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007f7f7f80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007f7f7f80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvssrlrn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff0000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff0000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4000c08000000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000080c000c080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4000c08000000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000080c000c080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000101000001010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000101000001010;
+  __m256i_out = __lasx_xvssrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000404;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000404;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrlrn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002000000000;
+  __m256i_out = __lasx_xvssrlrn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fff7fff;
+  __m256i_out = __lasx_xvssrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000200a000020020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000200a000020020;
+  __m256i_out = __lasx_xvssrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff1cff1cff1c3fc7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff1cff1cff1c3fc7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000002;
+  __m256i_out = __lasx_xvssrlrn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0002000200000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0002000200000000;
+  __m256i_out = __lasx_xvssrlrn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000017f7f7f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000017f7f7f7f;
+  __m256i_out = __lasx_xvssrlrn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrlrn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf5fffc00fc000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf5fffc00fc000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf5fffc00fc000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf5fffc00fc000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrlrn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf5fffc00fc000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf5fffc00fc000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001001900010019;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0a02041904010019;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001001900010019;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0a02041904010019;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000007b007e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000007b007e;
+  __m256i_out = __lasx_xvssrlrn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
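+  /* A brief marker comment (added for readability; not part of the generated
+     vectors): the cases from here on switch to the arithmetic-shift variants
+     (__lasx_xvssrarn_*) of the logical-shift cases (__lasx_xvssrlrn_*)
+     exercised above, using the same operand/expected-result pattern.  */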
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvssrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000017ffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000ffff0000;
+  __m256i_out = __lasx_xvssrarn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff80000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff80000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvssrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrarn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000100da000100fd;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001ffe20001fefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001009a000100fd;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001ff640001fefd;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000100da000100fd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001ffe20001fefd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001009a000100fd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001ff640001fefd;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007ff90000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000001ff60000;
+  __m256i_out = __lasx_xvssrarn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffe00000001;
+  __m256i_out = __lasx_xvssrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000001b0000001b;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001b00fd0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000001b0000001b;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001b00fd0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00010002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0080000200000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00010002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrarn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00c200c200c200c2;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00c200c200c200bb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00c200c200c200c2;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00c200c200c200bb;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffc2c2ffffc2c2;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffc2c2ffffc2c2;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffc2c2ffffc2c2;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffc2c2ffffc2c2;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x003100310031002f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x003100310031002f;
+  __m256i_out = __lasx_xvssrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffefffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000000000002;
+  __m256i_out = __lasx_xvssrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffff6f20;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000781e0000f221;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffff6f20;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000781e0000f221;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbf00bf00bf00bf00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbf84bf00bf00bf0e;
+  *((unsigned long*)& __m256i_op0[1]) = 0xbf00bf00bf00bf00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbf84bf00bf00bf0e;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbf00bf00bf00bf00;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbf84bf00bf00bf0e;
+  *((unsigned long*)& __m256i_op1[1]) = 0xbf00bf00bf00bf00;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbf84bf00bf00bf0e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00007f7f80007fa3;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007f670000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00007f7f80007fa3;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007f670000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvssrarn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
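+  /* Worked example: in the next case every shift count in op1 is zero,
+     so the doublewords of op0 (2 and 8 here) pass through the rounded
+     shift unchanged, fit in an unsigned word after saturation, and are
+     packed into the low half of each 128-bit lane, giving
+     0x0000000200000008.  */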
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000008;
+  __m256i_out = __lasx_xvssrarn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffff000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffff000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffff000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffff000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffff0001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffff0001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000408080c111414;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000408080c111414;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000408080c111414;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000408080c111414;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0002000200010002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0002000200010002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0002000200010002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0002000200010002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff7fff7fff;
+  __m256i_out = __lasx_xvssrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_hu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000010006d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000010006d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000800400010006d;
+  __m256i_out = __lasx_xvssrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0200000002000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x02000000fdffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0200000002000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x02000000fdffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000004ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000004ffffffff;
+  __m256i_out = __lasx_xvssrarn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff000000ff000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff000000ff000000;
+  __m256i_out = __lasx_xvssrarn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvssrarn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffb6811fffff80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff97c120000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffb6811fffff80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff97c120000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xdb410010cbe10010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xdb410010cbe10010;
+  __m256i_out = __lasx_xvssrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000019ffdf403;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000011ffd97c3;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000019ffdf403;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000011ffd97c3;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0020000000200000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x002000000020ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000004000000040;
+  __m256i_out = __lasx_xvssrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1a1a1a2c1a1a1a2c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1a1a1a2c1a1a1a2c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1a1a1a2c1a1a1a2c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1a1a1a2c1a1a1a2c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3838383838383838;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffdfffffe00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3838383838383838;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffdfffffe00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvssrarn_wu_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000020002000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000020002000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_h_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffbffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffbffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_bu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000007b007e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000007b007e;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc03b000200020002;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc03b000200020002;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc03b000200020002;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc03b000200020002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000001ec020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000001ec020;
+  __m256i_out = __lasx_xvssrarn_w_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarn_b_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
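+  /* The cases that follow cover the saturating rounded logical
+     shift-right-and-narrow intrinsics with an immediate shift count
+     (__lasx_xvssrlrni_*).  Informally, the elements of both operands are
+     shifted right logically by the immediate, rounded and saturated to
+     the narrower type; the narrowed second operand fills the low half
+     and the narrowed first operand the high half of each 128-bit lane,
+     which matches the layout of the expected vectors below.  */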
+  *((unsigned long*)& __m256i_op0[3]) = 0x3133c6409eecf8b0;
+  *((unsigned long*)& __m256i_op0[2]) = 0xddf50db3c617a115;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa432ea5a0913dc8e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x29d403af367b4545;
+  *((unsigned long*)& __m256i_op1[3]) = 0x38a966b31be83ee9;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5f6108dc25b8e028;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf41a56e8a20878d7;
+  *((unsigned long*)& __m256i_op1[0]) = 0x683b8b67e20c8ee5;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ffffffffffff7ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe06df0d7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x988eb37e000fb33d;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffed95be394b1e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000ffff8000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x06f880008000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x800080008000b8f1;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ff00ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ff00ff00;
+  __m256i_out = __lasx_xvssrlrni_bu_h(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000040100000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000040100000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000040100000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000040100000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0080200000802000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0080200000802000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000f18080010000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000f18080010000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_w_d(__m256i_op0,__m256i_op1,0x3b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_d_q(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_w_d(__m256i_op0,__m256i_op1,0x28);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000808080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_du_q(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffefffffefc;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000010;
+  __m256i_out = __lasx_xvssrlrni_d_q(__m256i_op0,__m256i_op1,0x7c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000020afefb1;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f350104f7ebffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000003fffc1;
+  *((unsigned long*)& __m256i_op1[0]) = 0x005c0003fff9ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000fe6a021;
+  *((unsigned long*)& __m256i_result[1]) = 0x2000000020000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000b8000;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x23);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff7fff7fff;
+  __m256i_out = __lasx_xvssrlrni_h_w(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000020001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x2e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff7fff7fff;
+  __m256i_out = __lasx_xvssrlrni_h_w(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0020000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0020000000000000;
+  __m256i_out = __lasx_xvssrlrni_d_q(__m256i_op0,__m256i_op1,0x4b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x25);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x33);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000100000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_du_q(__m256i_op0,__m256i_op1,0x2c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000002020000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000201eff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000002020000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001fef010;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffefffefffefffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010001000000000;
+  __m256i_out = __lasx_xvssrlrni_h_w(__m256i_op0,__m256i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffefffefffefffd;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffefffefffefffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff00000000;
+  __m256i_out = __lasx_xvssrlrni_h_w(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_du_q(__m256i_op0,__m256i_op1,0x29);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0e0f1192846ff912;
+  *((unsigned long*)& __m256i_op0[2]) = 0x002a0074666a4db9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0e0f1192846ff912;
+  *((unsigned long*)& __m256i_op0[0]) = 0x002a0074666a4db9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000018;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000018;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fff7fff05407fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fff7fff05407fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_h_w(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0408040800000004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0408040800000004;
+  __m256i_out = __lasx_xvssrlrni_bu_h(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_op1[2]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_op1[1]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_op1[0]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001fbfbfc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001fbfbfc;
+  __m256i_out = __lasx_xvssrlrni_du_q(__m256i_op0,__m256i_op1,0x62);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000fe01020b0001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000fe01020b0001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrlrni_du_q(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_bu_h(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000404040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000404040;
+  __m256i_out = __lasx_xvssrlrni_du_q(__m256i_op0,__m256i_op1,0x68);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_h_w(__m256i_op0,__m256i_op1,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000010486048c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000010486048c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_d_q(__m256i_op0,__m256i_op1,0x6f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfe7fffecfe7fffec;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfe7fffecfe7fffec;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808000800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080808000000;
+  __m256i_out = __lasx_xvssrlrni_bu_h(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff00040000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xd010101010101010;
+  *((unsigned long*)& __m256i_op0[2]) = 0xd010101010103218;
+  *((unsigned long*)& __m256i_op0[1]) = 0xd010101010101010;
+  *((unsigned long*)& __m256i_op0[0]) = 0xd010101010103218;
+  *((unsigned long*)& __m256i_op1[3]) = 0xd010101010101010;
+  *((unsigned long*)& __m256i_op1[2]) = 0xd010101010103218;
+  *((unsigned long*)& __m256i_op1[1]) = 0xd010101010101010;
+  *((unsigned long*)& __m256i_op1[0]) = 0xd010101010103218;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvssrlrni_w_d(__m256i_op0,__m256i_op1,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001ff8000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffe0000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001ff8000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffe0000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_w_d(__m256i_op0,__m256i_op1,0x3f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0020000000200000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0020000000200000;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x2b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbc30c40108a45423;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbc263e0e5d00e69f;
+  *((unsigned long*)& __m256i_op1[1]) = 0xbc30c40108a4544b;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbc20e63aa8b9663f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrlrni_hu_w(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0504080804030405;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0504060904040305;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0504080804030405;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0504060904040305;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000141020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000141020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_d_q(__m256i_op0,__m256i_op1,0x66);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000080000000800;
+  __m256i_out = __lasx_xvssrlrni_w_d(__m256i_op0,__m256i_op1,0x35);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000010101010;
+  *((unsigned long*)& __m256i_result[2]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000010101010;
+  *((unsigned long*)& __m256i_result[0]) = 0x1010101010101010;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0020000000200000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x1010101010001000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x1010101000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff000000ff;
+  __m256i_out = __lasx_xvssrlrni_bu_h(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff800000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_du_q(__m256i_op0,__m256i_op1,0x27);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_bu_h(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000465;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000465;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000008d00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000008d00000000;
+  __m256i_out = __lasx_xvssrlrni_bu_h(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x2d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_du_q(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff8000ffa3;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007fe70000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff8000ffa3;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fe70000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc03ae000ffff6000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc03ae000ffff6000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000003;
+  __m256i_out = __lasx_xvssrlrni_d_q(__m256i_op0,__m256i_op1,0x7e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000001ffe2000;
+  *((unsigned long*)& __m256i_result[2]) = 0x001fe020001fe020;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000001ffe2000;
+  *((unsigned long*)& __m256i_result[0]) = 0x001fe020001fe020;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x23);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_hu_w(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000002000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000002000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_w_d(__m256i_op0,__m256i_op1,0x38);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_h_w(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000004;
+  __m256i_out = __lasx_xvssrlrni_d_q(__m256i_op0,__m256i_op1,0x7e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_d_q(__m256i_op0,__m256i_op1,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvssrlrni_bu_h(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_hu_w(__m256i_op0,__m256i_op1,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f010100000101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f010100000101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0008000000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0008000000000010;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfefefefe3f800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfefefefe3f800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fff7fff;
+  __m256i_out = __lasx_xvssrlrni_h_w(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000008002d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000008002d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010000000000;
+  __m256i_out = __lasx_xvssrlrni_bu_h(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffbfff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3f7f7f7f407fffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3f7f7f7f407fffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000fdfdfe;
+  __m256i_out = __lasx_xvssrlrni_d_q(__m256i_op0,__m256i_op1,0x27);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x07ffffff07ffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x07ffffff08000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x07ffffff08000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x207f207f207f2000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000207f2000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f7f7f7f7f7f7f7f;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffeb68380002001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c08000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffeb68380002001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c08000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000007fff5b41c0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000007fff5b41d0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000007fff5b41c0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000007fff5b41d0;
+  __m256i_out = __lasx_xvssrlrni_d_q(__m256i_op0,__m256i_op1,0x59);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001000000010;
+  __m256i_out = __lasx_xvssrlrni_w_d(__m256i_op0,__m256i_op1,0x3c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_d_q(__m256i_op0,__m256i_op1,0x3f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00c00040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000008000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00c00040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000008000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_bu_h(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0002000200000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0002000200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000020002000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000020002000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff010100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff010100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000008000000080;
+  __m256i_out = __lasx_xvssrlrni_wu_d(__m256i_op0,__m256i_op1,0x39);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrlrni_hu_w(__m256i_op0,__m256i_op1,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f7f7f7f7f7f7f7f;
+  __m256i_out = __lasx_xvssrlrni_b_h(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
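+  /* The remaining cases exercise the arithmetic rounding shift variants
+     (xvssrarni) rather than the logical ones (xvssrlrni).  */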
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffc00;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffc00;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffc00;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffc00;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000020000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000020000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_du_q(__m256i_op0,__m256i_op1,0x23);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000f20;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000009f0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_w_d(__m256i_op0,__m256i_op1,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00001f41ffffbf00;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000400000000;
+  __m256i_out = __lasx_xvssrarni_w_d(__m256i_op0,__m256i_op1,0x2b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000010000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000010000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000010000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000010000000;
+  __m256i_out = __lasx_xvssrarni_b_h(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000100;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x5d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf5f5bfbaf5f5bfbe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf5f0bfb8f5d8bfe8;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf5f5bfbaf5f5bfbe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf5f0bfb8f5d8bfe8;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf5f5bfbaf5f5bfbe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf5f0bfb8f5d8bfe8;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf5f5bfbaf5f5bfbe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf5f0bfb8f5d8bfe8;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffff5f5c;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x6c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_op0[2]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_op0[1]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_op0[0]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_op1[3]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_op1[2]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_op1[1]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_op1[0]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrarni_wu_d(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fffff6ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fffff6ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_w_d(__m256i_op0,__m256i_op1,0x28);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0a09080706050403;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0a09080706050403;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0003000200000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0003000200000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_h_w(__m256i_op0,__m256i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvssrarni_h_w(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_w_d(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010000;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x30);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_hu_w(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001010300010102;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000410041;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_bu_h(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000df93f0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000077843;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000003800000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_w_d(__m256i_op0,__m256i_op1,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_bu_h(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_w_d(__m256i_op0,__m256i_op1,0x27);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x2000200020002000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_hu_w(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_wu_d(__m256i_op0,__m256i_op1,0x3b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_b_h(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_du_q(__m256i_op0,__m256i_op1,0x73);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8001b72e0001b72e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8001b72eaf12d5f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000247639d9cb530;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8001b72eaf12d5f0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffe056fd9d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffceba70;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_hu_w(__m256i_op0,__m256i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_h_w(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00150015003a402f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x333568ce26dcd055;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00150015003a402f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x333568ce26dcd055;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000007d0d0d0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000007d0d0d0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000800000098;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000040000ffca;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000800000098;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000040000ff79;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff04ff00ff00ff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff04ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000008000000a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000008000000a;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_du_q(__m256i_op0,__m256i_op1,0x44);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000120e120d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000120e120d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000907;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000907;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_w_d(__m256i_op0,__m256i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_hu_w(__m256i_op0,__m256i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_w_d(__m256i_op0,__m256i_op1,0x32);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_wu_d(__m256i_op0,__m256i_op1,0x27);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_hu_w(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0016001600160016;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0016001600160016;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0016001600160016;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0016001600160016;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_bu_h(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_bu_h(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_wu_d(__m256i_op0,__m256i_op1,0x3b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_wu_d(__m256i_op0,__m256i_op1,0x2b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_hu_w(__m256i_op0,__m256i_op1,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0010002000100020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010002000100020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0010002000100020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010002000100020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fffffffe;
+  __m256i_out = __lasx_xvssrarni_w_d(__m256i_op0,__m256i_op1,0x3e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_wu_d(__m256i_op0,__m256i_op1,0x2a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffe000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffe000;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x54);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00030006fa05f20e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00030081bd80f90e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000018;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000018;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_w_d(__m256i_op0,__m256i_op1,0x2d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x02407a3c00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0d0cf2f30d0cf2f3;
+  *((unsigned long*)& __m256i_op0[1]) = 0x02407a3c00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0d0cf2f30d0cf2f3;
+  *((unsigned long*)& __m256i_op1[3]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_op1[1]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_wu_d(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0020000f0000000f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0010000f0000000f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0020000f0000000f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0010000f0000000f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_b_h(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ff0fff0fff0f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ff0fff0fff0f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_h_w(__m256i_op0,__m256i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_h_w(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_h_w(__m256i_op0,__m256i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffff70156;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffff70156;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffff70156;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffff70156;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_du_q(__m256i_op0,__m256i_op1,0x74);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_wu_d(__m256i_op0,__m256i_op1,0x2c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xde00fe0000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fe010000fe01;
+  *((unsigned long*)& __m256i_op0[1]) = 0xde00fe0000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fe010000fe01;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_b_h(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000100010001ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000100010001ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000100010001ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000100010001ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00007ff000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00007ff000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x79);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_h_w(__m256i_op0,__m256i_op1,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7000700070007000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7000700070007000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000070007000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7000700070007000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[3]) = 0x0e0e0e0e0e0e0e0e;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000e0e0e0e0e0e;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_b_h(__m256i_op0,__m256i_op1,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc848c848c848c848;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8848c848c848c848;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc848c848c848c848;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8848c848c848c848;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_bu_h(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_hu_w(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xa1a1a1a1a1a15e5e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xa1a1a1a1a1a15e5e;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_hu_w(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003fe000000000;
+  __m256i_out = __lasx_xvssrarni_wu_d(__m256i_op0,__m256i_op1,0x2b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_du_q(__m256i_op0,__m256i_op1,0x45);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001fffe0001fffa;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001fffe00018069;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001fffe0001fffa;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001fffe00018069;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000002000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000002000;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x64);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_h_w(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000004000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000004000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_bu_h(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_h_w(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00b213171dff0606;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00e9a80014ff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00b213171dff0606;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00e9a80014ff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff00000000ffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00000000ffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_wu_d(__m256i_op0,__m256i_op1,0x3b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000038000000268;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000038000000268;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001010101;
+  __m256i_out = __lasx_xvssrarni_bu_h(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0400000004000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000400;
+  *((unsigned long*)& __m256i_result[1]) = 0x0400000004000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000400;
+  __m256i_out = __lasx_xvssrarni_wu_d(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x5b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0080000000000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0080000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x08000000000000f8;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x08000000000000f8;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0200000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x2000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0200000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x2000000000000000;
+  __m256i_out = __lasx_xvssrarni_wu_d(__m256i_op0,__m256i_op1,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x6a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x36);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_h_w(__m256i_op0,__m256i_op1,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrarni_hu_w(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000008;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000040000001b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000008;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000040000001b;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_b_h(__m256i_op0,__m256i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x41dffbffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffff00ff800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x41dffbffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff00ff800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f80ffffff808000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f80ffffff808000;
+  __m256i_out = __lasx_xvssrarni_b_h(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000001e00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0002000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssrarni_h_w(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000500020002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000700020033;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000500020002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000700020033;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000500020002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000700020033;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000500020002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000700020033;
+  *((unsigned long*)& __m256i_result[3]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1400080008000000;
+  __m256i_out = __lasx_xvssrarni_d_q(__m256i_op0,__m256i_op1,0x26);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000001c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000001de;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000001c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000001de;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000060000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000060000000;
+  __m256i_out = __lasx_xvssrarni_du_q(__m256i_op0,__m256i_op1,0x44);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00003fea0014734d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003fe900140d85;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00003fea0014734d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003fe900140d85;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000ff0000ff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000ff0000ff00;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssrarni_du_q(__m256i_op0,__m256i_op1,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
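+  /* The __lasx_xvclo_{b,h,w,d} cases below count the leading one bits of
+     each element: an all-ones element yields the element width in bits,
+     while any element with a clear sign bit yields zero.  */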
+  *((unsigned long*)& __m256i_op0[3]) = 0xffd1b24e00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffcea54ffff29a8;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff8cad88ff8306b4;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffc1278fffce4c8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0802010000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0806030008080001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0801010108010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0806000008060302;
+  __m256i_out = __lasx_xvclo_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfafafafafafafafa;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000fefefe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000007;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010001000100010;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xb70036db12c4007e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xb7146213fc1e0049;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000fefe02fffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xb71c413b199d04b5;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvclo_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0fff0fff00000020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0fff0fff00000020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000000;
+  __m256i_out = __lasx_xvclo_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01fc03fc01fc03fc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01fc03fc01fc03fc;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000200000001e;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000200000001e;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000808;
+  __m256i_out = __lasx_xvclo_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xd04752cdd5543b56;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6906e68064f3d78b;
+  *((unsigned long*)& __m256i_op0[1]) = 0xd04752cdd5543b56;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6906e68064f3d78b;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000300000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000300000002;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc0000000c0000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc000000080400000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc0000000c0000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc000000080400000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0002000000010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0002000000010000;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010000100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010000100000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010000100000000;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000004000000020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000004000000020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000201220001011c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000201220001011c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvclo_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010001000100010;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000100010;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclo_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
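+  /* The cases below exercise __lasx_xvclz_{b,h,w,d}: count leading zeros
+     in each 8/16/32/64-bit element of the 256-bit vector.  */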
+  *((unsigned long*)& __m256i_op0[3]) = 0x04481940fbb7e6bf;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf2781966e6991966;
+  *((unsigned long*)& __m256i_op0[1]) = 0x51258839aeda77c6;
+  *((unsigned long*)& __m256i_op0[0]) = 0xcf25f0e00f1ff0e0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0501030100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001030100000301;
+  *((unsigned long*)& __m256i_result[1]) = 0x0102000200000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0002000004030000;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000f0000000f;
+  __m256i_out = __lasx_xvclz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000003868686a20;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0045b8ae81bce1d8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000003868686a20;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0045b8ae81bce1d8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001a00000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000900000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001a00000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000900000000;
+  __m256i_out = __lasx_xvclz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010001000100010;
+  __m256i_out = __lasx_xvclz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080808080808;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808080807;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080808080807;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010001000100001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010001000100001;
+  __m256i_out = __lasx_xvclz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0008000000080000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0008000000080000;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080808080808;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010001000100010;
+  __m256i_out = __lasx_xvclz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002000000018;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002000000019;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000200000001e;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002000000019;
+  __m256i_out = __lasx_xvclz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0b085bfc00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0b004bc000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0b085bfc00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0b004bc000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0404010008080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0408010008080808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0404010008080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0408010008080808;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000012;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0404010008080808;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0408010008080808;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0404010008080808;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0408010008080808;
+  *((unsigned long*)& __m256i_result[3]) = 0x0505070804040404;
+  *((unsigned long*)& __m256i_result[2]) = 0x0504070804040404;
+  *((unsigned long*)& __m256i_result[1]) = 0x0505070804040404;
+  *((unsigned long*)& __m256i_result[0]) = 0x0504070804040404;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002000000020;
+  __m256i_out = __lasx_xvclz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002000000020;
+  __m256i_out = __lasx_xvclz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002000000020;
+  __m256i_out = __lasx_xvclz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001000000010;
+  __m256i_out = __lasx_xvclz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0006ffff0004ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0002ffff0000ff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0006ffff0004ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0002ffff0000ff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000000d;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000000e;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000000d;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000000e;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000032;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000003c000000032;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[2]) = 0x001000100010000a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[0]) = 0x001000060010000a;
+  __m256i_out = __lasx_xvclz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0003800400038004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000a800b000a800b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0003800400038004;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000a800b000a800b;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000000e;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000000e;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000000c;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002000000020;
+  __m256i_out = __lasx_xvclz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000008080800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000008080800;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080808080808;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0a0a000000000a0a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0a0a0a0a00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0a0a000000000a0a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0a0a0a0a00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0004001000100004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0004000400100010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0004001000100004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0004000400100010;
+  __m256i_out = __lasx_xvclz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002000000020;
+  __m256i_out = __lasx_xvclz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000020;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000007f8000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000007f8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000029;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000029;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002000000020;
+  __m256i_out = __lasx_xvclz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000007;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080808080808;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000027;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000027;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvclz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080808080808;
+  __m256i_out = __lasx_xvclz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
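+  /* The cases below exercise __lasx_xvpcnt_{b,h,w,d}: population count
+     (number of set bits) in each 8/16/32/64-bit element.  */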
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x639c3fffb5dffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xb8c7800094400001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0008000e000c000f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0009000100040001;
+  __m256i_out = __lasx_xvpcnt_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_op0[2]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_op0[1]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_op0[0]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_result[3]) = 0x0004000400040805;
+  *((unsigned long*)& __m256i_result[2]) = 0x0004000400040805;
+  *((unsigned long*)& __m256i_result[1]) = 0x0004000400040805;
+  *((unsigned long*)& __m256i_result[0]) = 0x0004000400040805;
+  __m256i_out = __lasx_xvpcnt_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffcf800fffcf800;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_result[3]) = 0x0008000800000003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0806050008060500;
+  *((unsigned long*)& __m256i_result[1]) = 0x0008000800000003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010000000100;
+  __m256i_out = __lasx_xvpcnt_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000002e2100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000040002;
+  __m256i_out = __lasx_xvpcnt_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvpcnt_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x34000000fff00000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff6e00000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3380000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x363c0000fff3c000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000030000000c;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001100000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000500000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000800000010;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00c100c100c100c1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00c100c100c100c1;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0003000300030003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0003000300030003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvpcnt_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000080800000808;
+  __m256i_out = __lasx_xvpcnt_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080808080808;
+  __m256i_out = __lasx_xvpcnt_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvpcnt_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvpcnt_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffe36780;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffe36780;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_result[2]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_result[0]) = 0x0100000100000001;
+  __m256i_out = __lasx_xvpcnt_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002000000020;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000020;
+  __m256i_out = __lasx_xvpcnt_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvpcnt_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000001555;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000015554001c003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000001555;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000015554001c003;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000304;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000030401010202;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000304;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000030401010202;
+  __m256i_out = __lasx_xvpcnt_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000a0008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000a0008;
+  __m256i_out = __lasx_xvpcnt_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010001000030000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010001000030000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010001000030000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010001000030000;
+  __m256i_out = __lasx_xvpcnt_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000010000685e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000020a4ffffbe4f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000010000685e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000020a4ffffbe4f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000040000001b;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000040000001b;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000b000b000b000b;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000b000b000b000b;
+  __m256i_out = __lasx_xvpcnt_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001f00000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001f00000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001200000012;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001200000012;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001200000012;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001200000012;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvpcnt_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
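+  /* Bit-clear tests: __lasx_xvbitclr_{b,h,w,d} clears, in each element of
+     __m256i_op0, the bit indexed by the corresponding element of
+     __m256i_op1 modulo the element width.  */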
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000040000fff8;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffff1f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffeff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffff1f;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffeff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000105fffffefb;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffff02000000fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000105fffffefb;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff02000000fe;
+  *((unsigned long*)& __m256i_result[3]) = 0xf7ffffffffffff1f;
+  *((unsigned long*)& __m256i_result[2]) = 0xbffffffffffffeff;
+  *((unsigned long*)& __m256i_result[1]) = 0xf7ffffffffffff1f;
+  *((unsigned long*)& __m256i_result[0]) = 0xbffffffffffffeff;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fff7fff7fffdefd;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff7fff7fff;
+  __m256i_out = __lasx_xvbitclr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000f0000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000f0000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1fe01e0000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1fe01e0000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000f0000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000f0000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0002555500000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0002555500000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0002555400000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0002555400000000;
+  __m256i_out = __lasx_xvbitclr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000002a542a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000002a542a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000002a542a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000002a542a;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0002000200020002;
+  __m256i_out = __lasx_xvbitclr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ffff00ff00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000fff00004542;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ffff00ff00;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000fff00004542;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ffff00ff00;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000fff00004542;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ffff00ff00;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000fff00004542;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00fe00feff02fe;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00fe00feff027f;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00fe00feff02fe;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00fe00feff027f;
+  __m256i_out = __lasx_xvbitclr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000023a20000a121;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000179e0000951d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000023a20000a121;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000179e0000951d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010000000100;
+  __m256i_out = __lasx_xvbitclr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000236200005111;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000175e0000490d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000236200005111;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000175e0000490d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffeeffaf;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000011;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffeeffaf;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000011;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000226200005111;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000165e0000480d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000226200005111;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000165e0000480d;
+  __m256i_out = __lasx_xvbitclr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000007fef;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007fef;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000007fef;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000007fef;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000007fee;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffff00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffff00;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000fefe7f00;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000fefe7f00;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1f001f00000007ef;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00001fff200007ef;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff00000000;
+  __m256i_out = __lasx_xvbitclr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000f90;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000f90;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffefffe00000000;
+  __m256i_out = __lasx_xvbitclr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff8000ffa3;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007fe70000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff8000ffa3;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fe70000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff8000ffa3;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007fe70000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff8000ffa3;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007fe70000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007f7f80007fa3;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007f670000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007f7f80007fa3;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007f670000;
+  __m256i_out = __lasx_xvbitclr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffeffff10000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffeffff10000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7ffffffffffffffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7ffffffffffffffe;
+  __m256i_out = __lasx_xvbitclr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3f8000003f800000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x3e8000003e800000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3e8000003e800000;
+  *((unsigned long*)& __m256i_result[1]) = 0x3e8000003e800000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3e8000003e800000;
+  __m256i_out = __lasx_xvbitclr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00001ff8d8d8c000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00001ff8d8d90000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00001ff8d8d8c000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00001ff8d8d90000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00001ef8d8d8c000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00001ef8d8d80000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00001ef8d8d8c000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00001ef8d8d80000;
+  __m256i_out = __lasx_xvbitclr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
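+  /* Single __lasx_xvpickve2gr_d extraction; it only exercises the builtin,
+     the scalar result is not compared in this block.  */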
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000fffe0000000c;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fffe0000000c;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010000;
+  __m256i_out = __lasx_xvbitclr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000003;
+  __m256i_out = __lasx_xvbitclr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffe00000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffe00000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfefee00000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfefee00000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fff000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fff000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000fff000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000fff000000000;
+  __m256i_out = __lasx_xvbitclr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ffff88ff88;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ffff88ff88;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ffff88ff88;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ffff88ff88;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00ff007f007f00;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00ff007f007f00;
+  __m256i_out = __lasx_xvbitclr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffeffffff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffeffffff00;
+  __m256i_out = __lasx_xvbitclr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
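+  /* Bit-set tests: __lasx_xvbitset_{b,h,w,d} sets, in each element of
+     __m256i_op0, the bit indexed by the corresponding element of
+     __m256i_op1 modulo the element width.  */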
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff000000010000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000095120000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc9da000063f50000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc7387fff6bbfffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffdffffffc81aca;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff3a0b9512;
+  *((unsigned long*)& __m256i_op1[1]) = 0x280bc9db313a63f5;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe032c738adcb6bbb;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff800001010400;
+  *((unsigned long*)& __m256i_result[2]) = 0x000180009d120004;
+  *((unsigned long*)& __m256i_result[1]) = 0xc9da080067f50020;
+  *((unsigned long*)& __m256i_result[0]) = 0xc73c7fff6bbfffff;
+  __m256i_out = __lasx_xvbitset_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffff8046867f79;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffff328dfff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6651bfff80000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00010001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00010001;
+  __m256i_out = __lasx_xvbitset_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000040000fff8;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00001f41ffffbf00;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x010180068080fff9;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvbitset_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvbitset_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvbitset_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvbitset_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x3ff1808001020101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x3ff1808001020101;
+  __m256i_out = __lasx_xvbitset_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvbitset_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvbitset_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000004fb;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0800000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvbitset_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010103;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvbitset_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffefffffefc;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000040000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x4000000010000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000040000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000040000010;
+  __m256i_out = __lasx_xvbitset_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbea2e127c046721f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1729c073816edebe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xde91f010000006f9;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5ef1f90efefaf30d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000060000108;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000001060005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000007fef0001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xbfa3e127c147721f;
+  *((unsigned long*)& __m256i_result[2]) = 0x1729c173836edfbe;
+  *((unsigned long*)& __m256i_result[1]) = 0xdf91f111808007fb;
+  *((unsigned long*)& __m256i_result[0]) = 0x5ff1f90ffffbf30f;
+  __m256i_out = __lasx_xvbitset_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xe161616161614f61;
+  *((unsigned long*)& __m256i_result[2]) = 0xe161616161614f61;
+  *((unsigned long*)& __m256i_result[1]) = 0xe161616161614f61;
+  *((unsigned long*)& __m256i_result[0]) = 0xe161616161614f61;
+  __m256i_out = __lasx_xvbitset_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01010101010000ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x8080808280808082;
+  *((unsigned long*)& __m256i_result[2]) = 0x8080808280808082;
+  *((unsigned long*)& __m256i_result[1]) = 0x8080808280808080;
+  *((unsigned long*)& __m256i_result[0]) = 0x8080808280808082;
+  __m256i_out = __lasx_xvbitset_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000082f8989a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000d58f43c8;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010183f9999b;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x01010101d58f43c9;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvbitset_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x45baa7ef6a95a985;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x45baa7ef6a95a985;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ffe7ffd7ffe7fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ffe7ffd7ffe8001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0707feb70707b7d1;
+  *((unsigned long*)& __m256i_result[2]) = 0x65baa7efea95a985;
+  *((unsigned long*)& __m256i_result[1]) = 0x0707feb70707b7d1;
+  *((unsigned long*)& __m256i_result[0]) = 0x65baa7ef6a95a987;
+  __m256i_out = __lasx_xvbitset_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x41cc5bb8a95fd1eb;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x41cc5bb8a95fd1eb;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7b7b7b7b80000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xcacacb1011040500;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7b7b7b7b80000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xcacacb1011040500;
+  *((unsigned long*)& __m256i_result[3]) = 0x49cc5bb8a95fd1eb;
+  *((unsigned long*)& __m256i_result[2]) = 0x7ff4080102102001;
+  *((unsigned long*)& __m256i_result[1]) = 0x49cc5bb8a95fd1eb;
+  *((unsigned long*)& __m256i_result[0]) = 0x7ff4080102102001;
+  __m256i_out = __lasx_xvbitset_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvbitset_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvbitset_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010401;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010401;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010401;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010401;
+  __m256i_out = __lasx_xvbitset_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdf00000052a00000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5b7f00ff5b7f00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xdf00000052a00000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5b7f00ff5b7f00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op1[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_result[3]) = 0xdf01010153a10101;
+  *((unsigned long*)& __m256i_result[2]) = 0x5b7f01ff5b7f10ff;
+  *((unsigned long*)& __m256i_result[1]) = 0xdf01010153a10101;
+  *((unsigned long*)& __m256i_result[0]) = 0x5b7f01ff5b7f10ff;
+  __m256i_out = __lasx_xvbitset_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_op1[2]) = 0xdbcbdbcb0000dbcb;
+  *((unsigned long*)& __m256i_op1[1]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_op1[0]) = 0xdbcbdbcb0000dbcb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000080000001000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000080000001000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000080000000800;
+  __m256i_out = __lasx_xvbitset_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000027262524;
+  *((unsigned long*)& __m256i_op0[2]) = 0x23222120171e151c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000027262524;
+  *((unsigned long*)& __m256i_op0[0]) = 0x23222120171e151c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x201fdfe0201fdfe0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x201fdfe0201fdfe0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010127272525;
+  *((unsigned long*)& __m256i_result[2]) = 0x23a2a121179e951d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010127272525;
+  *((unsigned long*)& __m256i_result[0]) = 0x23a2a121179e951d;
+  __m256i_out = __lasx_xvbitset_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvbitset_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x800080008000ffee;
+  *((unsigned long*)& __m256i_result[2]) = 0x800080008000ffee;
+  *((unsigned long*)& __m256i_result[1]) = 0x800080008000ffee;
+  *((unsigned long*)& __m256i_result[0]) = 0x800080008000ffee;
+  __m256i_out = __lasx_xvbitset_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvbitset_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000100010001ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000100010001ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000100010001ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000100010001ffff;
+  __m256i_out = __lasx_xvbitset_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00010000fffe0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00010000fffe0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00010000fffe0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00010000fffe0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvbitset_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000c9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000c9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x01010101010101c9;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x01010101010101c9;
+  __m256i_out = __lasx_xvbitset_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvbitset_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvbitset_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000affff800b;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000affff800b;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000affff800b;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000affff800b;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000800;
+  __m256i_out = __lasx_xvbitset_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0200000202000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0200000202000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000400010004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000400010004;
+  __m256i_out = __lasx_xvbitset_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000e0000000d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000e0000000d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000f0001000f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000f0001000d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000f0001000f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000f0001000d;
+  __m256i_out = __lasx_xvbitset_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f010000000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f010000000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f010100000101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f010100000101;
+  __m256i_out = __lasx_xvbitset_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvbitset_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x498100814843ffe1;
+  *((unsigned long*)& __m256i_result[2]) = 0x4981008168410001;
+  *((unsigned long*)& __m256i_result[1]) = 0x498100814843ffe1;
+  *((unsigned long*)& __m256i_result[0]) = 0x4981008168410001;
+  __m256i_out = __lasx_xvbitset_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000090b0906;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100002000;
+  __m256i_out = __lasx_xvbitset_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffd880;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffd880;
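+  /* __lasx_xvpickve2gr_w: extract word element 2 of op0 into a scalar;
+     the extracted value is not asserted here.  */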
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x03af03af03af03af;
+  *((unsigned long*)& __m256i_op0[2]) = 0x03acfc5303260e80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x03af03af03af03af;
+  *((unsigned long*)& __m256i_op0[0]) = 0x03acfc5303260e80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000002780;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000002780;
+  *((unsigned long*)& __m256i_result[3]) = 0x03af03af03af03af;
+  *((unsigned long*)& __m256i_result[2]) = 0x03acfc5303260e81;
+  *((unsigned long*)& __m256i_result[1]) = 0x03af03af03af03af;
+  *((unsigned long*)& __m256i_result[0]) = 0x03acfc5303260e81;
+  __m256i_out = __lasx_xvbitset_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
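+  /* __lasx_xvbitrev_{b,h,w,d}: for each element, flip (XOR) the bit
+     selected by (op1 modulo the element width) in op0.  */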
+  *((unsigned long*)& __m256i_op0[3]) = 0x0501030102141923;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffd5020738b43ddb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x010200023b8e4174;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff4ff4e11410b40;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01fa022a01a401e5;
+  *((unsigned long*)& __m256i_op1[2]) = 0x030d03aa0079029b;
+  *((unsigned long*)& __m256i_op1[1]) = 0x024c01f901950261;
+  *((unsigned long*)& __m256i_op1[0]) = 0x008102c2008a029f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101070102041903;
+  *((unsigned long*)& __m256i_result[2]) = 0xdfd506073ab435db;
+  *((unsigned long*)& __m256i_result[1]) = 0x110202023bae4176;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff6ff4a15418b40;
+  __m256i_out = __lasx_xvbitrev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe0edf8d7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffbe8bc70f;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffe0edf8d7;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffbe8bc70f;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffe06df8d7;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffbe8b470f;
+  __m256i_out = __lasx_xvbitrev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffe0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000001e18;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffe0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000001e18;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffe0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000001e18;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffe0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000001e18;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000010000ffe1;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000101001e18;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000010000ffe1;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000101001e18;
+  __m256i_out = __lasx_xvbitrev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvbitrev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfefefefefefefefe;
+  __m256i_out = __lasx_xvbitrev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x1d1a1b181d1a1b18;
+  *((unsigned long*)& __m256i_result[2]) = 0x9c9b9a999c9b9a99;
+  *((unsigned long*)& __m256i_result[1]) = 0x1d1a1b181d1a1b18;
+  *((unsigned long*)& __m256i_result[0]) = 0x9c9b9a999c9b9a99;
+  __m256i_out = __lasx_xvbitrev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvbitrev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000033e87ef1;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000002e2100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x80008000b3e8fef1;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x80008000802ea100;
+  __m256i_out = __lasx_xvbitrev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1c80780000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1c80780000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvbitrev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0200000200000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2c27000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0200000200000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x2c27000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000400000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000400000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvbitrev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvbitrev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvbitrev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000800080008000;
+  __m256i_out = __lasx_xvbitrev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff00ff00ffff00;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff000000ff00ff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffff00ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00000000ff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000180000000;
+  __m256i_out = __lasx_xvbitrev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff8fff8fff8fff8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff8fff8fff8fff8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x8001800180018001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x8001800180018001;
+  __m256i_out = __lasx_xvbitrev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00010003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0080000200000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00010003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000200000003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00010002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0080000200000003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00010002;
+  __m256i_out = __lasx_xvbitrev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x80000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x80000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvbitrev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvbitrev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvbitrev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvbitrev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvbitrev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf7f7f7f6f7f7f7f6;
+  *((unsigned long*)& __m256i_result[2]) = 0xf7f7f7f6f7f7f7f6;
+  *((unsigned long*)& __m256i_result[1]) = 0xf7f7f7f6f7f7f7f6;
+  *((unsigned long*)& __m256i_result[0]) = 0xf7f7f7f6f7f7f7f6;
+  __m256i_out = __lasx_xvbitrev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvbitrev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7eeefefefefefefe;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x7eeefefefefefefe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvbitrev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000010000fffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000010000fffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000010000fffe;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000010000fffe;
+  __m256i_out = __lasx_xvbitrev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvbitrev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000004;
+  __m256i_out = __lasx_xvbitrev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvbitrev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000008000b;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000008000b;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000008000a;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000000a;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000008000a;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000000a;
+  __m256i_out = __lasx_xvbitrev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000100010001fffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000100010001fffe;
+  __m256i_out = __lasx_xvbitrev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvbitrev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x40fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x40fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x40fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x40fe00fe00fe00fe;
+  __m256i_out = __lasx_xvbitrev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffc0007ffe0002;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000400000018002;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffc0007ffe0002;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000400000018002;
+  __m256i_out = __lasx_xvbitrev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfefefefe01010101;
+  *((unsigned long*)& __m256i_result[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfefefefe01010101;
+  __m256i_out = __lasx_xvbitrev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000006d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000400008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000006d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000400008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000010006d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000010006d;
+  *((unsigned long*)& __m256i_result[3]) = 0x010101010101016c;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101410128;
+  *((unsigned long*)& __m256i_result[1]) = 0x010101010101016c;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101410128;
+  __m256i_out = __lasx_xvbitrev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x800000ff000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x800000ff000000ff;
+  __m256i_out = __lasx_xvbitrev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffb6811fffff80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff97c120000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffb6811fffff80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff97c120000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffe97c020010001;
+  __m256i_out = __lasx_xvbitrev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000027;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000027;
+  *((unsigned long*)& __m256i_result[3]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfefefefefefefe7f;
+  *((unsigned long*)& __m256i_result[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfefefefefefefe7f;
+  __m256i_out = __lasx_xvbitrev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010081;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100018080;
+  __m256i_out = __lasx_xvbitrev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvbitrev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010110;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010110;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvbitrev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvbitrev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
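+  /* The __lasx_xvbitrev_* cases above toggle, in each element of op0, the
+     bit whose index is given by the corresponding element of op1 (taken
+     modulo the element width).  The cases that follow move on to the
+     immediate forms, beginning with __lasx_xvbitclri_{b,h,w,d}, which
+     clear bit `imm' in every element: expected = element & ~(1 << imm).  */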
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe06df8d7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffbe8b470f;
+  *((unsigned long*)& __m256i_result[3]) = 0x7ffffffffffff7ff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffe06df0d7;
+  *((unsigned long*)& __m256i_result[1]) = 0x7ffffffffffff7ff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffbe8b470f;
+  __m256i_out = __lasx_xvbitclri_d(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_b(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0010ffc80010ff52;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff1ffca0011ffcb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0010ffc80010ff52;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff1ffca0011ffcb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010bfc80010bf52;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff1bfca0011bfcb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010bfc80010bf52;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff1bfca0011bfcb;
+  __m256i_out = __lasx_xvbitclri_w(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_w(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_w(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_d(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000005536aaaaac;
+  *((unsigned long*)& __m256i_op0[2]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000005536aaaaac;
+  *((unsigned long*)& __m256i_op0[0]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000005136aaaaa8;
+  *((unsigned long*)& __m256i_result[2]) = 0x55515551aaaaaaa8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000005136aaaaa8;
+  *((unsigned long*)& __m256i_result[0]) = 0x55515551aaaaaaa8;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fdf000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fdf000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fdf7fff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fdf7fff00000000;
+  __m256i_out = __lasx_xvbitclri_d(__m256i_op0,0x35);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000fd0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000fd0000;
+  __m256i_out = __lasx_xvbitclri_w(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ffe7ffe7ffe7ffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007ffe7ffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ffe7ffe7ffe8000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000807e7ffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f7e7f7e7f7e7f7e;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007f7e7f7e;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f7e7f7e7f7e0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000007e7f7e;
+  __m256i_out = __lasx_xvbitclri_b(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_d(__m256i_op0,0x24);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdf01010153a10101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5b7f01ff5b7f10ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xdf01010153a10101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5b7f01ff5b7f10ff;
+  *((unsigned long*)& __m256i_result[3]) = 0xcf01010143a10101;
+  *((unsigned long*)& __m256i_result[2]) = 0x4b6f01ef4b6f00ef;
+  *((unsigned long*)& __m256i_result[1]) = 0xcf01010143a10101;
+  *((unsigned long*)& __m256i_result[0]) = 0x4b6f01ef4b6f00ef;
+  __m256i_out = __lasx_xvbitclri_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xdfffffffdfffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xdfffffffdfffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvbitclri_w(__m256i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_w(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff02ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0100;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ff7fff7f;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ff7f027f;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ff7f0100;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00fe00fe7f027f;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8011ffee804c004c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00faff0500c3ff3c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x80f900f980780078;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0057ffa800ceff31;
+  *((unsigned long*)& __m256i_result[3]) = 0x8011ffae800c000c;
+  *((unsigned long*)& __m256i_result[2]) = 0x00baff050083ff3c;
+  *((unsigned long*)& __m256i_result[1]) = 0x80b900b980380038;
+  *((unsigned long*)& __m256i_result[0]) = 0x0017ffa8008eff31;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_d(__m256i_op0,0x3b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000027262524;
+  *((unsigned long*)& __m256i_op0[2]) = 0x232221201f1e1d1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000027262524;
+  *((unsigned long*)& __m256i_op0[0]) = 0x232221201f1e1d1c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000027262524;
+  *((unsigned long*)& __m256i_result[2]) = 0x23222120171e151c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000027262524;
+  *((unsigned long*)& __m256i_result[0]) = 0x23222120171e151c;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_d(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvbitclri_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000fefe0000fefe;
+  *((unsigned long*)& __m256i_result[2]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fefe0000fefe;
+  *((unsigned long*)& __m256i_result[0]) = 0x00fe00fe00fe00fe;
+  __m256i_out = __lasx_xvbitclri_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000009;
+  __m256i_out = __lasx_xvbitclri_d(__m256i_op0,0x26);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_b(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_result[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_result[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fe1ffe0ffe1ffe0;
+  __m256i_out = __lasx_xvbitclri_d(__m256i_op0,0x3f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_d(__m256i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffbfffffffb;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fffffffb;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffbfffffffb;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fffffffb;
+  __m256i_out = __lasx_xvbitclri_w(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_w(__m256i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000800200027;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000800200027;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000800200027;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000800200027;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000800200028;
+  __m256i_out = __lasx_xvbitclri_w(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffee00ba;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffee00ba;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xefefefefefee00aa;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xefefefefefee00aa;
+  __m256i_out = __lasx_xvbitclri_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010001000100010;
+  __m256i_out = __lasx_xvbitclri_w(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000f788f788;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000f788f788;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_w(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_b(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
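+  /* The scalar extraction below is not compared against an expected value;
+     presumably it is only here to exercise the xvpickve2gr_du pattern.  */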
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitclri_w(__m256i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffefffefffefffe;
+  __m256i_out = __lasx_xvbitclri_h(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
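+  /* From here on the cases cover __lasx_xvbitseti_{b,h,w,d}, which set
+     bit `imm' in every element: expected = element | (1 << imm).  */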
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvbitseti_d(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvbitseti_w(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffcf800fffcf800;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffcf800fffcf800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000080000000800;
+  __m256i_out = __lasx_xvbitseti_w(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00007f7f00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00007f7f00007fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000040000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007f7f00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000040000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f7f00007fff;
+  __m256i_out = __lasx_xvbitseti_d(__m256i_op0,0x2a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_result[2]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_result[1]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_result[0]) = 0x0202020202020202;
+  __m256i_out = __lasx_xvbitseti_b(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000800000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000800000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000800000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000800000000;
+  __m256i_out = __lasx_xvbitseti_d(__m256i_op0,0x23);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_result[2]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_result[1]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_result[0]) = 0x1010101010101010;
+  __m256i_out = __lasx_xvbitseti_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000004000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000004000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000004000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000004000000;
+  __m256i_out = __lasx_xvbitseti_d(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000013;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001000000fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000013;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000001000000fb;
+  *((unsigned long*)& __m256i_result[3]) = 0x8080808180808093;
+  *((unsigned long*)& __m256i_result[2]) = 0x80808081808080fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x8080808180808093;
+  *((unsigned long*)& __m256i_result[0]) = 0x80808081808080fb;
+  __m256i_out = __lasx_xvbitseti_b(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000020;
+  __m256i_out = __lasx_xvbitseti_d(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080808080808;
+  __m256i_out = __lasx_xvbitseti_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010000000100000;
+  __m256i_out = __lasx_xvbitseti_w(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010001000100010;
+  __m256i_out = __lasx_xvbitseti_h(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_result[3]) = 0x1000100054445443;
+  *((unsigned long*)& __m256i_result[2]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_result[1]) = 0x1000100054445443;
+  *((unsigned long*)& __m256i_result[0]) = 0x7bbbbbbbf7777778;
+  __m256i_out = __lasx_xvbitseti_h(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020202020202020;
+  __m256i_out = __lasx_xvbitseti_b(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000200;
+  __m256i_out = __lasx_xvbitseti_d(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffa0078fffa0074;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffa0078fffa0074;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffa2078fffa2074;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffa2078fffa2074;
+  __m256i_out = __lasx_xvbitseti_w(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffeffebfb7afb62;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffeffebfb7afb62;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffeffebfb7afb62;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffeffebfb7afb62;
+  __m256i_out = __lasx_xvbitseti_d(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_result[3]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_result[2]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_result[1]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_result[0]) = 0xe7e7e7e7e7e7e7e7;
+  __m256i_out = __lasx_xvbitseti_h(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000004411;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000004411;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020202020206431;
+  __m256i_out = __lasx_xvbitseti_b(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0003030300000300;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0003030300000300;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0003030300000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0003030300000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0043030300400300;
+  *((unsigned long*)& __m256i_result[2]) = 0x0043030300400300;
+  *((unsigned long*)& __m256i_result[1]) = 0x0043030300400100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0043030300400100;
+  __m256i_out = __lasx_xvbitseti_w(__m256i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x223d76f09f3881ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3870ca8d013e76a0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x223d76f09f37e357;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ec0a1b2aba7ed0;
+  *((unsigned long*)& __m256i_result[3]) = 0x223d76f09f3881ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x3870ca9d013e76b0;
+  *((unsigned long*)& __m256i_result[1]) = 0x223d76f09f37e357;
+  *((unsigned long*)& __m256i_result[0]) = 0x43ec0a1b2aba7ed0;
+  __m256i_out = __lasx_xvbitseti_w(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf7f8f7f8f800f800;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003f780000ff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf7f8f7f80000fff9;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003f780000ff80;
+  *((unsigned long*)& __m256i_result[3]) = 0xf7f8f7f8f800f800;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003f784000ff80;
+  *((unsigned long*)& __m256i_result[1]) = 0xf7f8f7f84000fff9;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003f784000ff80;
+  __m256i_out = __lasx_xvbitseti_d(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[0]) = 0x4040404040404040;
+  __m256i_out = __lasx_xvbitseti_b(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_result[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_result[2]) = 0xffe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_result[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_result[0]) = 0xffe1ffe0ffe1ffe0;
+  __m256i_out = __lasx_xvbitseti_w(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffefef800;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffefef800;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000008000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffefef800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000008000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffefef800;
+  __m256i_out = __lasx_xvbitseti_d(__m256i_op0,0x27);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0002000000020000;
+  __m256i_out = __lasx_xvbitseti_w(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000030b8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000030b8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_result[2]) = 0x00020002000230ba;
+  *((unsigned long*)& __m256i_result[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_result[0]) = 0x00020002000230ba;
+  __m256i_out = __lasx_xvbitseti_h(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_result[2]) = 0x8100810081008100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x8100810081008100;
+  __m256i_out = __lasx_xvbitseti_h(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007878;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000007878;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010001000107878;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010001000107878;
+  __m256i_out = __lasx_xvbitseti_h(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080808080808;
+  __m256i_out = __lasx_xvbitseti_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffb2f600006f48;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffb2f600006f48;
+  *((unsigned long*)& __m256i_result[3]) = 0x4000400140004001;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffff2f640006f48;
+  *((unsigned long*)& __m256i_result[1]) = 0x4000400140004001;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffff2f640006f48;
+  __m256i_out = __lasx_xvbitseti_h(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvbitseti_d(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x4040404040404040;
+  __m256i_out = __lasx_xvbitseti_b(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_result[3]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_result[2]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_result[1]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_result[0]) = 0xfd12fd12fd12fd12;
+  __m256i_out = __lasx_xvbitseti_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
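+  /* The following cases cover __lasx_xvbitrevi_{b,h,w,d}, which toggle
+     bit `imm' in every element: expected = element ^ (1 << imm).  */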
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ff00ff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ff00ff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x01010101fe01fe01;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x01010101fe01fe01;
+  __m256i_out = __lasx_xvbitrevi_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2000200020002000;
+  *((unsigned long*)& __m256i_result[2]) = 0x2000200020002000;
+  *((unsigned long*)& __m256i_result[1]) = 0x2000200020002000;
+  *((unsigned long*)& __m256i_result[0]) = 0x2000200020002000;
+  __m256i_out = __lasx_xvbitrevi_h(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvbitrevi_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7ff77fff7ff7;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7ff77fff7ff7;
+  __m256i_out = __lasx_xvbitrevi_w(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000020001;
+  *((unsigned long*)& __m256i_result[3]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_result[2]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_result[1]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_result[0]) = 0x1010101010121011;
+  __m256i_out = __lasx_xvbitrevi_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvbitrevi_d(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000004000000040;
+  __m256i_out = __lasx_xvbitrevi_w(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020202020202020;
+  __m256i_out = __lasx_xvbitrevi_b(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000020000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000020000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000020000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000020000000000;
+  __m256i_out = __lasx_xvbitrevi_d(__m256i_op0,0x29);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvbitrevi_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[0]) = 0x4040404040404040;
+  __m256i_out = __lasx_xvbitrevi_b(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001c4e8ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001c4e8ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0080000000800000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0081c4e8ff7fffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0080000000800000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0081c4e8ff7fffff;
+  __m256i_out = __lasx_xvbitrevi_w(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f7f7f7f7f017ffd;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f7f7f7f7f017ffd;
+  __m256i_out = __lasx_xvbitrevi_b(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x4000000000000000;
+  __m256i_out = __lasx_xvbitrevi_d(__m256i_op0,0x3e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000002080100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000002080100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000008000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000a080100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000008000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000a080100;
+  __m256i_out = __lasx_xvbitrevi_d(__m256i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_result[2]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0100010001000100;
+  __m256i_out = __lasx_xvbitrevi_h(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffe0047d00e00480;
+  *((unsigned long*)& __m256i_op0[2]) = 0x001fc0200060047a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffe0047d00e00480;
+  *((unsigned long*)& __m256i_op0[0]) = 0x001fc0200060047a;
+  *((unsigned long*)& __m256i_result[3]) = 0xfee1057c01e10581;
+  *((unsigned long*)& __m256i_result[2]) = 0x011ec1210161057b;
+  *((unsigned long*)& __m256i_result[1]) = 0xfee1057c01e10581;
+  *((unsigned long*)& __m256i_result[0]) = 0x011ec1210161057b;
+  __m256i_out = __lasx_xvbitrevi_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_result[3]) = 0xfdfdfdfdfdfdfdfd;
+  *((unsigned long*)& __m256i_result[2]) = 0xe27fe2821d226278;
+  *((unsigned long*)& __m256i_result[1]) = 0xfdfdfdfdfdfdfdfd;
+  *((unsigned long*)& __m256i_result[0]) = 0xe27fe2821d226278;
+  __m256i_out = __lasx_xvbitrevi_b(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000002;
+  __m256i_out = __lasx_xvbitrevi_w(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000800000008;
+  __m256i_out = __lasx_xvbitrevi_w(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000800200027;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000800200027;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_result[3]) = 0x080808000828082f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080008280820;
+  *((unsigned long*)& __m256i_result[1]) = 0x080808000828082f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080008280820;
+  __m256i_out = __lasx_xvbitrevi_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvbitrevi_h(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000800000000000;
+  __m256i_out = __lasx_xvbitrevi_d(__m256i_op0,0x2f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0200000002000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x02000000fdffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0200000002000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x02000000fdffffff;
+  __m256i_out = __lasx_xvbitrevi_w(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffeffed;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffeffed;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffeffed;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffeffed;
+  __m256i_out = __lasx_xvbitrevi_d(__m256i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc039000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc039000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc039000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc039000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xc03b000200020002;
+  *((unsigned long*)& __m256i_result[2]) = 0xc03b000200020002;
+  *((unsigned long*)& __m256i_result[1]) = 0xc03b000200020002;
+  *((unsigned long*)& __m256i_result[0]) = 0xc03b000200020002;
+  __m256i_out = __lasx_xvbitrevi_h(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff81007fff0100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff81007fff0100;
+  __m256i_out = __lasx_xvbitrevi_w(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-builtin.c b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-builtin.c
new file mode 100644
index 00000000000..ecb8d639bdd
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-builtin.c
@@ -0,0 +1,1509 @@
+/* Test builtins for LoongArch LASX ASE instructions.  */
+/* { dg-do compile } */
+/* { dg-options "-mlasx" } */
+/* { dg-final { scan-assembler-times "lasx_xvsll_b:.*xvsll\\.b.*lasx_xvsll_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsll_h:.*xvsll\\.h.*lasx_xvsll_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsll_w:.*xvsll\\.w.*lasx_xvsll_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsll_d:.*xvsll\\.d.*lasx_xvsll_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslli_b:.*xvslli\\.b.*lasx_xvslli_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslli_h:.*xvslli\\.h.*lasx_xvslli_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslli_w:.*xvslli\\.w.*lasx_xvslli_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslli_d:.*xvslli\\.d.*lasx_xvslli_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsra_b:.*xvsra\\.b.*lasx_xvsra_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsra_h:.*xvsra\\.h.*lasx_xvsra_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsra_w:.*xvsra\\.w.*lasx_xvsra_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsra_d:.*xvsra\\.d.*lasx_xvsra_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrai_b:.*xvsrai\\.b.*lasx_xvsrai_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrai_h:.*xvsrai\\.h.*lasx_xvsrai_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrai_w:.*xvsrai\\.w.*lasx_xvsrai_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrai_d:.*xvsrai\\.d.*lasx_xvsrai_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrar_b:.*xvsrar\\.b.*lasx_xvsrar_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrar_h:.*xvsrar\\.h.*lasx_xvsrar_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrar_w:.*xvsrar\\.w.*lasx_xvsrar_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrar_d:.*xvsrar\\.d.*lasx_xvsrar_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrari_b:.*xvsrari\\.b.*lasx_xvsrari_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrari_h:.*xvsrari\\.h.*lasx_xvsrari_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrari_w:.*xvsrari\\.w.*lasx_xvsrari_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrari_d:.*xvsrari\\.d.*lasx_xvsrari_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrl_b:.*xvsrl\\.b.*lasx_xvsrl_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrl_h:.*xvsrl\\.h.*lasx_xvsrl_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrl_w:.*xvsrl\\.w.*lasx_xvsrl_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrl_d:.*xvsrl\\.d.*lasx_xvsrl_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrli_b:.*xvsrli\\.b.*lasx_xvsrli_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrli_h:.*xvsrli\\.h.*lasx_xvsrli_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrli_w:.*xvsrli\\.w.*lasx_xvsrli_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrli_d:.*xvsrli\\.d.*lasx_xvsrli_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlr_b:.*xvsrlr\\.b.*lasx_xvsrlr_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlr_h:.*xvsrlr\\.h.*lasx_xvsrlr_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlr_w:.*xvsrlr\\.w.*lasx_xvsrlr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlr_d:.*xvsrlr\\.d.*lasx_xvsrlr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlri_b:.*xvsrlri\\.b.*lasx_xvsrlri_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlri_h:.*xvsrlri\\.h.*lasx_xvsrlri_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlri_w:.*xvsrlri\\.w.*lasx_xvsrlri_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlri_d:.*xvsrlri\\.d.*lasx_xvsrlri_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitclr_b:.*xvbitclr\\.b.*lasx_xvbitclr_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitclr_h:.*xvbitclr\\.h.*lasx_xvbitclr_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitclr_w:.*xvbitclr\\.w.*lasx_xvbitclr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitclr_d:.*xvbitclr\\.d.*lasx_xvbitclr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitclri_b:.*xvbitclri\\.b.*lasx_xvbitclri_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitclri_h:.*xvbitclri\\.h.*lasx_xvbitclri_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitclri_w:.*xvbitclri\\.w.*lasx_xvbitclri_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitclri_d:.*xvbitclri\\.d.*lasx_xvbitclri_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitset_b:.*xvbitset\\.b.*lasx_xvbitset_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitset_h:.*xvbitset\\.h.*lasx_xvbitset_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitset_w:.*xvbitset\\.w.*lasx_xvbitset_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitset_d:.*xvbitset\\.d.*lasx_xvbitset_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitseti_b:.*xvbitseti\\.b.*lasx_xvbitseti_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitseti_h:.*xvbitseti\\.h.*lasx_xvbitseti_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitseti_w:.*xvbitseti\\.w.*lasx_xvbitseti_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitseti_d:.*xvbitseti\\.d.*lasx_xvbitseti_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitrev_b:.*xvbitrev\\.b.*lasx_xvbitrev_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitrev_h:.*xvbitrev\\.h.*lasx_xvbitrev_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitrev_w:.*xvbitrev\\.w.*lasx_xvbitrev_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitrev_d:.*xvbitrev\\.d.*lasx_xvbitrev_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitrevi_b:.*xvbitrevi\\.b.*lasx_xvbitrevi_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitrevi_h:.*xvbitrevi\\.h.*lasx_xvbitrevi_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitrevi_w:.*xvbitrevi\\.w.*lasx_xvbitrevi_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitrevi_d:.*xvbitrevi\\.d.*lasx_xvbitrevi_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvadd_b:.*xvadd\\.b.*lasx_xvadd_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvadd_h:.*xvadd\\.h.*lasx_xvadd_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvadd_w:.*xvadd\\.w.*lasx_xvadd_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvadd_d:.*xvadd\\.d.*lasx_xvadd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddi_bu:.*xvaddi\\.bu.*lasx_xvaddi_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddi_hu:.*xvaddi\\.hu.*lasx_xvaddi_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddi_wu:.*xvaddi\\.wu.*lasx_xvaddi_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddi_du:.*xvaddi\\.du.*lasx_xvaddi_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsub_b:.*xvsub\\.b.*lasx_xvsub_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsub_h:.*xvsub\\.h.*lasx_xvsub_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsub_w:.*xvsub\\.w.*lasx_xvsub_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsub_d:.*xvsub\\.d.*lasx_xvsub_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubi_bu:.*xvsubi\\.bu.*lasx_xvsubi_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubi_hu:.*xvsubi\\.hu.*lasx_xvsubi_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubi_wu:.*xvsubi\\.wu.*lasx_xvsubi_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubi_du:.*xvsubi\\.du.*lasx_xvsubi_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmax_b:.*xvmax\\.b.*lasx_xvmax_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmax_h:.*xvmax\\.h.*lasx_xvmax_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmax_w:.*xvmax\\.w.*lasx_xvmax_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmax_d:.*xvmax\\.d.*lasx_xvmax_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaxi_b:.*xvmaxi\\.b.*lasx_xvmaxi_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaxi_h:.*xvmaxi\\.h.*lasx_xvmaxi_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaxi_w:.*xvmaxi\\.w.*lasx_xvmaxi_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaxi_d:.*xvmaxi\\.d.*lasx_xvmaxi_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmax_bu:.*xvmax\\.bu.*lasx_xvmax_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmax_hu:.*xvmax\\.hu.*lasx_xvmax_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmax_wu:.*xvmax\\.wu.*lasx_xvmax_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmax_du:.*xvmax\\.du.*lasx_xvmax_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaxi_bu:.*xvmaxi\\.bu.*lasx_xvmaxi_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaxi_hu:.*xvmaxi\\.hu.*lasx_xvmaxi_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaxi_wu:.*xvmaxi\\.wu.*lasx_xvmaxi_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaxi_du:.*xvmaxi\\.du.*lasx_xvmaxi_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmin_b:.*xvmin\\.b.*lasx_xvmin_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmin_h:.*xvmin\\.h.*lasx_xvmin_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmin_w:.*xvmin\\.w.*lasx_xvmin_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmin_d:.*xvmin\\.d.*lasx_xvmin_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmini_b:.*xvmini\\.b.*lasx_xvmini_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmini_h:.*xvmini\\.h.*lasx_xvmini_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmini_w:.*xvmini\\.w.*lasx_xvmini_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmini_d:.*xvmini\\.d.*lasx_xvmini_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmin_bu:.*xvmin\\.bu.*lasx_xvmin_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmin_hu:.*xvmin\\.hu.*lasx_xvmin_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmin_wu:.*xvmin\\.wu.*lasx_xvmin_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmin_du:.*xvmin\\.du.*lasx_xvmin_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmini_bu:.*xvmini\\.bu.*lasx_xvmini_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmini_hu:.*xvmini\\.hu.*lasx_xvmini_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmini_wu:.*xvmini\\.wu.*lasx_xvmini_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmini_du:.*xvmini\\.du.*lasx_xvmini_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvseq_b:.*xvseq\\.b.*lasx_xvseq_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvseq_h:.*xvseq\\.h.*lasx_xvseq_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvseq_w:.*xvseq\\.w.*lasx_xvseq_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvseq_d:.*xvseq\\.d.*lasx_xvseq_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvseqi_b:.*xvseqi\\.b.*lasx_xvseqi_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvseqi_h:.*xvseqi\\.h.*lasx_xvseqi_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvseqi_w:.*xvseqi\\.w.*lasx_xvseqi_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvseqi_d:.*xvseqi\\.d.*lasx_xvseqi_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslt_b:.*xvslt\\.b.*lasx_xvslt_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslt_h:.*xvslt\\.h.*lasx_xvslt_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslt_w:.*xvslt\\.w.*lasx_xvslt_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslt_d:.*xvslt\\.d.*lasx_xvslt_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslti_b:.*xvslti\\.b.*lasx_xvslti_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslti_h:.*xvslti\\.h.*lasx_xvslti_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslti_w:.*xvslti\\.w.*lasx_xvslti_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslti_d:.*xvslti\\.d.*lasx_xvslti_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslt_bu:.*xvslt\\.bu.*lasx_xvslt_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslt_hu:.*xvslt\\.hu.*lasx_xvslt_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslt_wu:.*xvslt\\.wu.*lasx_xvslt_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslt_du:.*xvslt\\.du.*lasx_xvslt_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslti_bu:.*xvslti\\.bu.*lasx_xvslti_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslti_hu:.*xvslti\\.hu.*lasx_xvslti_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslti_wu:.*xvslti\\.wu.*lasx_xvslti_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslti_du:.*xvslti\\.du.*lasx_xvslti_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsle_b:.*xvsle\\.b.*lasx_xvsle_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsle_h:.*xvsle\\.h.*lasx_xvsle_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsle_w:.*xvsle\\.w.*lasx_xvsle_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsle_d:.*xvsle\\.d.*lasx_xvsle_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslei_b:.*xvslei\\.b.*lasx_xvslei_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslei_h:.*xvslei\\.h.*lasx_xvslei_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslei_w:.*xvslei\\.w.*lasx_xvslei_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslei_d:.*xvslei\\.d.*lasx_xvslei_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsle_bu:.*xvsle\\.bu.*lasx_xvsle_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsle_hu:.*xvsle\\.hu.*lasx_xvsle_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsle_wu:.*xvsle\\.wu.*lasx_xvsle_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsle_du:.*xvsle\\.du.*lasx_xvsle_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslei_bu:.*xvslei\\.bu.*lasx_xvslei_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslei_hu:.*xvslei\\.hu.*lasx_xvslei_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslei_wu:.*xvslei\\.wu.*lasx_xvslei_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvslei_du:.*xvslei\\.du.*lasx_xvslei_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsat_b:.*xvsat\\.b.*lasx_xvsat_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsat_h:.*xvsat\\.h.*lasx_xvsat_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsat_w:.*xvsat\\.w.*lasx_xvsat_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsat_d:.*xvsat\\.d.*lasx_xvsat_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsat_bu:.*xvsat\\.bu.*lasx_xvsat_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsat_hu:.*xvsat\\.hu.*lasx_xvsat_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsat_wu:.*xvsat\\.wu.*lasx_xvsat_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsat_du:.*xvsat\\.du.*lasx_xvsat_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvadda_b:.*xvadda\\.b.*lasx_xvadda_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvadda_h:.*xvadda\\.h.*lasx_xvadda_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvadda_w:.*xvadda\\.w.*lasx_xvadda_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvadda_d:.*xvadda\\.d.*lasx_xvadda_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsadd_b:.*xvsadd\\.b.*lasx_xvsadd_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsadd_h:.*xvsadd\\.h.*lasx_xvsadd_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsadd_w:.*xvsadd\\.w.*lasx_xvsadd_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsadd_d:.*xvsadd\\.d.*lasx_xvsadd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsadd_bu:.*xvsadd\\.bu.*lasx_xvsadd_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsadd_hu:.*xvsadd\\.hu.*lasx_xvsadd_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsadd_wu:.*xvsadd\\.wu.*lasx_xvsadd_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsadd_du:.*xvsadd\\.du.*lasx_xvsadd_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavg_b:.*xvavg\\.b.*lasx_xvavg_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavg_h:.*xvavg\\.h.*lasx_xvavg_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavg_w:.*xvavg\\.w.*lasx_xvavg_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavg_d:.*xvavg\\.d.*lasx_xvavg_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavg_bu:.*xvavg\\.bu.*lasx_xvavg_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavg_hu:.*xvavg\\.hu.*lasx_xvavg_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavg_wu:.*xvavg\\.wu.*lasx_xvavg_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavg_du:.*xvavg\\.du.*lasx_xvavg_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavgr_b:.*xvavgr\\.b.*lasx_xvavgr_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavgr_h:.*xvavgr\\.h.*lasx_xvavgr_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavgr_w:.*xvavgr\\.w.*lasx_xvavgr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavgr_d:.*xvavgr\\.d.*lasx_xvavgr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavgr_bu:.*xvavgr\\.bu.*lasx_xvavgr_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavgr_hu:.*xvavgr\\.hu.*lasx_xvavgr_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavgr_wu:.*xvavgr\\.wu.*lasx_xvavgr_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvavgr_du:.*xvavgr\\.du.*lasx_xvavgr_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssub_b:.*xvssub\\.b.*lasx_xvssub_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssub_h:.*xvssub\\.h.*lasx_xvssub_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssub_w:.*xvssub\\.w.*lasx_xvssub_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssub_d:.*xvssub\\.d.*lasx_xvssub_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssub_bu:.*xvssub\\.bu.*lasx_xvssub_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssub_hu:.*xvssub\\.hu.*lasx_xvssub_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssub_wu:.*xvssub\\.wu.*lasx_xvssub_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssub_du:.*xvssub\\.du.*lasx_xvssub_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvabsd_b:.*xvabsd\\.b.*lasx_xvabsd_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvabsd_h:.*xvabsd\\.h.*lasx_xvabsd_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvabsd_w:.*xvabsd\\.w.*lasx_xvabsd_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvabsd_d:.*xvabsd\\.d.*lasx_xvabsd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvabsd_bu:.*xvabsd\\.bu.*lasx_xvabsd_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvabsd_hu:.*xvabsd\\.hu.*lasx_xvabsd_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvabsd_wu:.*xvabsd\\.wu.*lasx_xvabsd_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvabsd_du:.*xvabsd\\.du.*lasx_xvabsd_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmul_b:.*xvmul\\.b.*lasx_xvmul_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmul_h:.*xvmul\\.h.*lasx_xvmul_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmul_w:.*xvmul\\.w.*lasx_xvmul_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmul_d:.*xvmul\\.d.*lasx_xvmul_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmadd_b:.*xvmadd\\.b.*lasx_xvmadd_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmadd_h:.*xvmadd\\.h.*lasx_xvmadd_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmadd_w:.*xvmadd\\.w.*lasx_xvmadd_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmadd_d:.*xvmadd\\.d.*lasx_xvmadd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmsub_b:.*xvmsub\\.b.*lasx_xvmsub_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmsub_h:.*xvmsub\\.h.*lasx_xvmsub_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmsub_w:.*xvmsub\\.w.*lasx_xvmsub_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmsub_d:.*xvmsub\\.d.*lasx_xvmsub_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvdiv_b:.*xvdiv\\.b.*lasx_xvdiv_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvdiv_h:.*xvdiv\\.h.*lasx_xvdiv_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvdiv_w:.*xvdiv\\.w.*lasx_xvdiv_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvdiv_d:.*xvdiv\\.d.*lasx_xvdiv_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvdiv_bu:.*xvdiv\\.bu.*lasx_xvdiv_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvdiv_hu:.*xvdiv\\.hu.*lasx_xvdiv_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvdiv_wu:.*xvdiv\\.wu.*lasx_xvdiv_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvdiv_du:.*xvdiv\\.du.*lasx_xvdiv_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhaddw_h_b:.*xvhaddw\\.h\\.b.*lasx_xvhaddw_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhaddw_w_h:.*xvhaddw\\.w\\.h.*lasx_xvhaddw_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhaddw_d_w:.*xvhaddw\\.d\\.w.*lasx_xvhaddw_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhaddw_hu_bu:.*xvhaddw\\.hu\\.bu.*lasx_xvhaddw_hu_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhaddw_wu_hu:.*xvhaddw\\.wu\\.hu.*lasx_xvhaddw_wu_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhaddw_du_wu:.*xvhaddw\\.du\\.wu.*lasx_xvhaddw_du_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhsubw_h_b:.*xvhsubw\\.h\\.b.*lasx_xvhsubw_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhsubw_w_h:.*xvhsubw\\.w\\.h.*lasx_xvhsubw_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhsubw_d_w:.*xvhsubw\\.d\\.w.*lasx_xvhsubw_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhsubw_hu_bu:.*xvhsubw\\.hu\\.bu.*lasx_xvhsubw_hu_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhsubw_wu_hu:.*xvhsubw\\.wu\\.hu.*lasx_xvhsubw_wu_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhsubw_du_wu:.*xvhsubw\\.du\\.wu.*lasx_xvhsubw_du_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmod_b:.*xvmod\\.b.*lasx_xvmod_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmod_h:.*xvmod\\.h.*lasx_xvmod_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmod_w:.*xvmod\\.w.*lasx_xvmod_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmod_d:.*xvmod\\.d.*lasx_xvmod_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmod_bu:.*xvmod\\.bu.*lasx_xvmod_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmod_hu:.*xvmod\\.hu.*lasx_xvmod_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmod_wu:.*xvmod\\.wu.*lasx_xvmod_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmod_du:.*xvmod\\.du.*lasx_xvmod_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrepl128vei_b:.*xvrepl128vei\\.b.*lasx_xvrepl128vei_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrepl128vei_h:.*xvrepl128vei\\.h.*lasx_xvrepl128vei_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrepl128vei_w:.*xvrepl128vei\\.w.*lasx_xvrepl128vei_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrepl128vei_d:.*xvrepl128vei\\.d.*lasx_xvrepl128vei_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickev_b:.*xvpickev\\.b.*lasx_xvpickev_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickev_h:.*xvpickev\\.h.*lasx_xvpickev_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickev_w:.*xvpickev\\.w.*lasx_xvpickev_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickev_d:.*xvilvl\\.d.*lasx_xvpickev_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickod_b:.*xvpickod\\.b.*lasx_xvpickod_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickod_h:.*xvpickod\\.h.*lasx_xvpickod_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickod_w:.*xvpickod\\.w.*lasx_xvpickod_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickod_d:.*xvilvh\\.d.*lasx_xvpickod_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvilvh_b:.*xvilvh\\.b.*lasx_xvilvh_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvilvh_h:.*xvilvh\\.h.*lasx_xvilvh_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvilvh_w:.*xvilvh\\.w.*lasx_xvilvh_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvilvh_d:.*xvilvh\\.d.*lasx_xvilvh_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvilvl_b:.*xvilvl\\.b.*lasx_xvilvl_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvilvl_h:.*xvilvl\\.h.*lasx_xvilvl_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvilvl_w:.*xvilvl\\.w.*lasx_xvilvl_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvilvl_d:.*xvilvl\\.d.*lasx_xvilvl_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpackev_b:.*xvpackev\\.b.*lasx_xvpackev_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpackev_h:.*xvpackev\\.h.*lasx_xvpackev_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpackev_w:.*xvpackev\\.w.*lasx_xvpackev_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpackev_d:.*xvilvl\\.d.*lasx_xvpackev_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpackod_b:.*xvpackod\\.b.*lasx_xvpackod_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpackod_h:.*xvpackod\\.h.*lasx_xvpackod_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpackod_w:.*xvpackod\\.w.*lasx_xvpackod_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpackod_d:.*xvilvh\\.d.*lasx_xvpackod_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvshuf_b:.*xvshuf\\.b.*lasx_xvshuf_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvshuf_h:.*xvshuf\\.h.*lasx_xvshuf_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvshuf_w:.*xvshuf\\.w.*lasx_xvshuf_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvshuf_d:.*xvshuf\\.d.*lasx_xvshuf_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvand_v:.*xvand\\.v.*lasx_xvand_v" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvandi_b:.*xvandi\\.b.*lasx_xvandi_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvor_v:.*xvor\\.v.*lasx_xvor_v" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvori_b:.*xvbitseti\\.b.*lasx_xvori_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvnor_v:.*xvnor\\.v.*lasx_xvnor_v" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvnori_b:.*xvnori\\.b.*lasx_xvnori_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvxor_v:.*xvxor\\.v.*lasx_xvxor_v" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvxori_b:.*xvbitrevi\\.b.*lasx_xvxori_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitsel_v:.*xvbitsel\\.v.*lasx_xvbitsel_v" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbitseli_b:.*xvbitseli\\.b.*lasx_xvbitseli_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvshuf4i_b:.*xvshuf4i\\.b.*lasx_xvshuf4i_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvshuf4i_h:.*xvshuf4i\\.h.*lasx_xvshuf4i_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvshuf4i_w:.*xvshuf4i\\.w.*lasx_xvshuf4i_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplgr2vr_b:.*xvreplgr2vr\\.b.*lasx_xvreplgr2vr_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplgr2vr_h:.*xvreplgr2vr\\.h.*lasx_xvreplgr2vr_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplgr2vr_w:.*xvreplgr2vr\\.w.*lasx_xvreplgr2vr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplgr2vr_d:.*xvreplgr2vr\\.d.*lasx_xvreplgr2vr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpcnt_b:.*xvpcnt\\.b.*lasx_xvpcnt_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpcnt_h:.*xvpcnt\\.h.*lasx_xvpcnt_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpcnt_w:.*xvpcnt\\.w.*lasx_xvpcnt_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpcnt_d:.*xvpcnt\\.d.*lasx_xvpcnt_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvclo_b:.*xvclo\\.b.*lasx_xvclo_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvclo_h:.*xvclo\\.h.*lasx_xvclo_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvclo_w:.*xvclo\\.w.*lasx_xvclo_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvclo_d:.*xvclo\\.d.*lasx_xvclo_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvclz_b:.*xvclz\\.b.*lasx_xvclz_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvclz_h:.*xvclz\\.h.*lasx_xvclz_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvclz_w:.*xvclz\\.w.*lasx_xvclz_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvclz_d:.*xvclz\\.d.*lasx_xvclz_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfadd_s:.*xvfadd\\.s.*lasx_xvfadd_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfadd_d:.*xvfadd\\.d.*lasx_xvfadd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfsub_s:.*xvfsub\\.s.*lasx_xvfsub_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfsub_d:.*xvfsub\\.d.*lasx_xvfsub_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmul_s:.*xvfmul\\.s.*lasx_xvfmul_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmul_d:.*xvfmul\\.d.*lasx_xvfmul_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfdiv_s:.*xvfdiv\\.s.*lasx_xvfdiv_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfdiv_d:.*xvfdiv\\.d.*lasx_xvfdiv_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcvt_h_s:.*xvfcvt\\.h\\.s.*lasx_xvfcvt_h_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcvt_s_d:.*xvfcvt\\.s\\.d.*lasx_xvfcvt_s_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmin_s:.*xvfmin\\.s.*lasx_xvfmin_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmin_d:.*xvfmin\\.d.*lasx_xvfmin_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmina_s:.*xvfmina\\.s.*lasx_xvfmina_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmina_d:.*xvfmina\\.d.*lasx_xvfmina_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmax_s:.*xvfmax\\.s.*lasx_xvfmax_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmax_d:.*xvfmax\\.d.*lasx_xvfmax_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmaxa_s:.*xvfmaxa\\.s.*lasx_xvfmaxa_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmaxa_d:.*xvfmaxa\\.d.*lasx_xvfmaxa_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfclass_s:.*xvfclass\\.s.*lasx_xvfclass_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfclass_d:.*xvfclass\\.d.*lasx_xvfclass_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfsqrt_s:.*xvfsqrt\\.s.*lasx_xvfsqrt_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfsqrt_d:.*xvfsqrt\\.d.*lasx_xvfsqrt_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrecip_s:.*xvfrecip\\.s.*lasx_xvfrecip_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrecip_d:.*xvfrecip\\.d.*lasx_xvfrecip_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrint_s:.*xvfrint\\.s.*lasx_xvfrint_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrint_d:.*xvfrint\\.d.*lasx_xvfrint_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrsqrt_s:.*xvfrsqrt\\.s.*lasx_xvfrsqrt_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrsqrt_d:.*xvfrsqrt\\.d.*lasx_xvfrsqrt_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvflogb_s:.*xvflogb\\.s.*lasx_xvflogb_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvflogb_d:.*xvflogb\\.d.*lasx_xvflogb_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcvth_s_h:.*xvfcvth\\.s\\.h.*lasx_xvfcvth_s_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcvth_d_s:.*xvfcvth\\.d\\.s.*lasx_xvfcvth_d_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcvtl_s_h:.*xvfcvtl\\.s\\.h.*lasx_xvfcvtl_s_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcvtl_d_s:.*xvfcvtl\\.d\\.s.*lasx_xvfcvtl_d_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftint_w_s:.*xvftint\\.w\\.s.*lasx_xvftint_w_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftint_l_d:.*xvftint\\.l\\.d.*lasx_xvftint_l_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftint_wu_s:.*xvftint\\.wu\\.s.*lasx_xvftint_wu_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftint_lu_d:.*xvftint\\.lu\\.d.*lasx_xvftint_lu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrz_w_s:.*xvftintrz\\.w\\.s.*lasx_xvftintrz_w_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrz_l_d:.*xvftintrz\\.l\\.d.*lasx_xvftintrz_l_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrz_wu_s:.*xvftintrz\\.wu\\.s.*lasx_xvftintrz_wu_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrz_lu_d:.*xvftintrz\\.lu\\.d.*lasx_xvftintrz_lu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvffint_s_w:.*xvffint\\.s\\.w.*lasx_xvffint_s_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvffint_d_l:.*xvffint\\.d\\.l.*lasx_xvffint_d_l" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvffint_s_wu:.*xvffint\\.s\\.wu.*lasx_xvffint_s_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvffint_d_lu:.*xvffint\\.d\\.lu.*lasx_xvffint_d_lu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplve_b:.*xvreplve\\.b.*lasx_xvreplve_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplve_h:.*xvreplve\\.h.*lasx_xvreplve_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplve_w:.*xvreplve\\.w.*lasx_xvreplve_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplve_d:.*xvreplve\\.d.*lasx_xvreplve_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpermi_w:.*xvpermi\\.w.*lasx_xvpermi_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvandn_v:.*xvandn\\.v.*lasx_xvandn_v" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvneg_b:.*xvneg\\.b.*lasx_xvneg_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvneg_h:.*xvneg\\.h.*lasx_xvneg_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvneg_w:.*xvneg\\.w.*lasx_xvneg_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvneg_d:.*xvneg\\.d.*lasx_xvneg_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmuh_b:.*xvmuh\\.b.*lasx_xvmuh_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmuh_h:.*xvmuh\\.h.*lasx_xvmuh_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmuh_w:.*xvmuh\\.w.*lasx_xvmuh_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmuh_d:.*xvmuh\\.d.*lasx_xvmuh_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmuh_bu:.*xvmuh\\.bu.*lasx_xvmuh_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmuh_hu:.*xvmuh\\.hu.*lasx_xvmuh_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmuh_wu:.*xvmuh\\.wu.*lasx_xvmuh_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmuh_du:.*xvmuh\\.du.*lasx_xvmuh_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsllwil_h_b:.*xvsllwil\\.h\\.b.*lasx_xvsllwil_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsllwil_w_h:.*xvsllwil\\.w\\.h.*lasx_xvsllwil_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsllwil_d_w:.*xvsllwil\\.d\\.w.*lasx_xvsllwil_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsllwil_hu_bu:.*xvsllwil\\.hu\\.bu.*lasx_xvsllwil_hu_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsllwil_wu_hu:.*xvsllwil\\.wu\\.hu.*lasx_xvsllwil_wu_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsllwil_du_wu:.*xvsllwil\\.du\\.wu.*lasx_xvsllwil_du_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsran_b_h:.*xvsran\\.b\\.h.*lasx_xvsran_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsran_h_w:.*xvsran\\.h\\.w.*lasx_xvsran_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsran_w_d:.*xvsran\\.w\\.d.*lasx_xvsran_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssran_b_h:.*xvssran\\.b\\.h.*lasx_xvssran_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssran_h_w:.*xvssran\\.h\\.w.*lasx_xvssran_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssran_w_d:.*xvssran\\.w\\.d.*lasx_xvssran_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssran_bu_h:.*xvssran\\.bu\\.h.*lasx_xvssran_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssran_hu_w:.*xvssran\\.hu\\.w.*lasx_xvssran_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssran_wu_d:.*xvssran\\.wu\\.d.*lasx_xvssran_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrarn_b_h:.*xvsrarn\\.b\\.h.*lasx_xvsrarn_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrarn_h_w:.*xvsrarn\\.h\\.w.*lasx_xvsrarn_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrarn_w_d:.*xvsrarn\\.w\\.d.*lasx_xvsrarn_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarn_b_h:.*xvssrarn\\.b\\.h.*lasx_xvssrarn_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarn_h_w:.*xvssrarn\\.h\\.w.*lasx_xvssrarn_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarn_w_d:.*xvssrarn\\.w\\.d.*lasx_xvssrarn_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarn_bu_h:.*xvssrarn\\.bu\\.h.*lasx_xvssrarn_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarn_hu_w:.*xvssrarn\\.hu\\.w.*lasx_xvssrarn_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarn_wu_d:.*xvssrarn\\.wu\\.d.*lasx_xvssrarn_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrln_b_h:.*xvsrln\\.b\\.h.*lasx_xvsrln_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrln_h_w:.*xvsrln\\.h\\.w.*lasx_xvsrln_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrln_w_d:.*xvsrln\\.w\\.d.*lasx_xvsrln_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrln_bu_h:.*xvssrln\\.bu\\.h.*lasx_xvssrln_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrln_hu_w:.*xvssrln\\.hu\\.w.*lasx_xvssrln_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrln_wu_d:.*xvssrln\\.wu\\.d.*lasx_xvssrln_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlrn_b_h:.*xvsrlrn\\.b\\.h.*lasx_xvsrlrn_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlrn_h_w:.*xvsrlrn\\.h\\.w.*lasx_xvsrlrn_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlrn_w_d:.*xvsrlrn\\.w\\.d.*lasx_xvsrlrn_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrn_bu_h:.*xvssrlrn\\.bu\\.h.*lasx_xvssrlrn_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrn_hu_w:.*xvssrlrn\\.hu\\.w.*lasx_xvssrlrn_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrn_wu_d:.*xvssrlrn\\.wu\\.d.*lasx_xvssrlrn_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrstpi_b:.*xvfrstpi\\.b.*lasx_xvfrstpi_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrstpi_h:.*xvfrstpi\\.h.*lasx_xvfrstpi_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrstp_b:.*xvfrstp\\.b.*lasx_xvfrstp_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrstp_h:.*xvfrstp\\.h.*lasx_xvfrstp_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvshuf4i_d:.*xvshuf4i\\.d.*lasx_xvshuf4i_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbsrl_v:.*xvbsrl\\.v.*lasx_xvbsrl_v" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvbsll_v:.*xvbsll\\.v.*lasx_xvbsll_v" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvextrins_b:.*xvextrins\\.b.*lasx_xvextrins_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvextrins_h:.*xvextrins\\.h.*lasx_xvextrins_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvextrins_w:.*xvextrins\\.w.*lasx_xvextrins_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvextrins_d:.*xvextrins\\.d.*lasx_xvextrins_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmskltz_b:.*xvmskltz\\.b.*lasx_xvmskltz_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmskltz_h:.*xvmskltz\\.h.*lasx_xvmskltz_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmskltz_w:.*xvmskltz\\.w.*lasx_xvmskltz_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmskltz_d:.*xvmskltz\\.d.*lasx_xvmskltz_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsigncov_b:.*xvsigncov\\.b.*lasx_xvsigncov_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsigncov_h:.*xvsigncov\\.h.*lasx_xvsigncov_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsigncov_w:.*xvsigncov\\.w.*lasx_xvsigncov_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsigncov_d:.*xvsigncov\\.d.*lasx_xvsigncov_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmadd_s:.*xvfmadd\\.s.*lasx_xvfmadd_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmadd_d:.*xvfmadd\\.d.*lasx_xvfmadd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmsub_s:.*xvfmsub\\.s.*lasx_xvfmsub_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfmsub_d:.*xvfmsub\\.d.*lasx_xvfmsub_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfnmadd_s:.*xvfnmadd\\.s.*lasx_xvfnmadd_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfnmadd_d:.*xvfnmadd\\.d.*lasx_xvfnmadd_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfnmsub_s:.*xvfnmsub\\.s.*lasx_xvfnmsub_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfnmsub_d:.*xvfnmsub\\.d.*lasx_xvfnmsub_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrne_w_s:.*xvftintrne\\.w\\.s.*lasx_xvftintrne_w_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrne_l_d:.*xvftintrne\\.l\\.d.*lasx_xvftintrne_l_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrp_w_s:.*xvftintrp\\.w\\.s.*lasx_xvftintrp_w_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrp_l_d:.*xvftintrp\\.l\\.d.*lasx_xvftintrp_l_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrm_w_s:.*xvftintrm\\.w\\.s.*lasx_xvftintrm_w_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrm_l_d:.*xvftintrm\\.l\\.d.*lasx_xvftintrm_l_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftint_w_d:.*xvftint\\.w\\.d.*lasx_xvftint_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvffint_s_l:.*xvffint\\.s\\.l.*lasx_xvffint_s_l" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrz_w_d:.*xvftintrz\\.w\\.d.*lasx_xvftintrz_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrp_w_d:.*xvftintrp\\.w\\.d.*lasx_xvftintrp_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrm_w_d:.*xvftintrm\\.w\\.d.*lasx_xvftintrm_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrne_w_d:.*xvftintrne\\.w\\.d.*lasx_xvftintrne_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftinth_l_s:.*xvftinth\\.l\\.s.*lasx_xvftinth_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintl_l_s:.*xvftintl\\.l\\.s.*lasx_xvftintl_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvffinth_d_w:.*xvffinth\\.d\\.w.*lasx_xvffinth_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvffintl_d_w:.*xvffintl\\.d\\.w.*lasx_xvffintl_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrzh_l_s:.*xvftintrzh\\.l\\.s.*lasx_xvftintrzh_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrzl_l_s:.*xvftintrzl\\.l\\.s.*lasx_xvftintrzl_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrph_l_s:.*xvftintrph\\.l\\.s.*lasx_xvftintrph_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrpl_l_s:.*xvftintrpl\\.l\\.s.*lasx_xvftintrpl_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrmh_l_s:.*xvftintrmh\\.l\\.s.*lasx_xvftintrmh_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrml_l_s:.*xvftintrml\\.l\\.s.*lasx_xvftintrml_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrneh_l_s:.*xvftintrneh\\.l\\.s.*lasx_xvftintrneh_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvftintrnel_l_s:.*xvftintrnel\\.l\\.s.*lasx_xvftintrnel_l_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrintrne_s:.*xvfrintrne\\.s.*lasx_xvfrintrne_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrintrne_d:.*xvfrintrne\\.d.*lasx_xvfrintrne_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrintrz_s:.*xvfrintrz\\.s.*lasx_xvfrintrz_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrintrz_d:.*xvfrintrz\\.d.*lasx_xvfrintrz_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrintrp_s:.*xvfrintrp\\.s.*lasx_xvfrintrp_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrintrp_d:.*xvfrintrp\\.d.*lasx_xvfrintrp_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrintrm_s:.*xvfrintrm\\.s.*lasx_xvfrintrm_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfrintrm_d:.*xvfrintrm\\.d.*lasx_xvfrintrm_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvld:.*xvld.*lasx_xvld" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvst:.*xvst.*lasx_xvst" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvstelm_b:.*xvstelm\\.b.*lasx_xvstelm_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvstelm_h:.*xvstelm\\.h.*lasx_xvstelm_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvstelm_w:.*xvstelm\\.w.*lasx_xvstelm_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvstelm_d:.*xvstelm\\.d.*lasx_xvstelm_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvinsve0_w:.*xvinsve0\\.w.*lasx_xvinsve0_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvinsve0_d:.*xvinsve0\\.d.*lasx_xvinsve0_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickve_w:.*xvpickve\\.w.*lasx_xvpickve_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickve_d:.*xvpickve\\.d.*lasx_xvpickve_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrn_b_h:.*xvssrlrn\\.b\\.h.*lasx_xvssrlrn_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrn_h_w:.*xvssrlrn\\.h\\.w.*lasx_xvssrlrn_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrn_w_d:.*xvssrlrn\\.w\\.d.*lasx_xvssrlrn_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrln_b_h:.*xvssrln\\.b\\.h.*lasx_xvssrln_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrln_h_w:.*xvssrln\\.h\\.w.*lasx_xvssrln_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrln_w_d:.*xvssrln\\.w\\.d.*lasx_xvssrln_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvorn_v:.*xvorn\\.v.*lasx_xvorn_v" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvldi:.*xvldi.*lasx_xvldi" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvldx:.*xvldx.*lasx_xvldx" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvstx:.*xvstx.*lasx_xvstx" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvextl_qu_du:.*xvextl\\.qu\\.du.*lasx_xvextl_qu_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvinsgr2vr_w:.*xvinsgr2vr\\.w.*lasx_xvinsgr2vr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvinsgr2vr_d:.*xvinsgr2vr\\.d.*lasx_xvinsgr2vr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplve0_b:.*xvreplve0\\.b.*lasx_xvreplve0_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplve0_h:.*xvreplve0\\.h.*lasx_xvreplve0_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplve0_w:.*xvreplve0\\.w.*lasx_xvreplve0_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplve0_d:.*xvreplve0\\.d.*lasx_xvreplve0_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvreplve0_q:.*xvreplve0\\.q.*lasx_xvreplve0_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_vext2xv_h_b:.*vext2xv\\.h\\.b.*lasx_vext2xv_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_vext2xv_w_h:.*vext2xv\\.w\\.h.*lasx_vext2xv_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_vext2xv_d_w:.*vext2xv\\.d\\.w.*lasx_vext2xv_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_vext2xv_w_b:.*vext2xv\\.w\\.b.*lasx_vext2xv_w_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_vext2xv_d_h:.*vext2xv\\.d\\.h.*lasx_vext2xv_d_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_vext2xv_d_b:.*vext2xv\\.d\\.b.*lasx_vext2xv_d_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_vext2xv_hu_bu:.*vext2xv\\.hu\\.bu.*lasx_vext2xv_hu_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_vext2xv_wu_hu:.*vext2xv\\.wu\\.hu.*lasx_vext2xv_wu_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_vext2xv_du_wu:.*vext2xv\\.du\\.wu.*lasx_vext2xv_du_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_vext2xv_wu_bu:.*vext2xv\\.wu\\.bu.*lasx_vext2xv_wu_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_vext2xv_du_hu:.*vext2xv\\.du\\.hu.*lasx_vext2xv_du_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_vext2xv_du_bu:.*vext2xv\\.du\\.bu.*lasx_vext2xv_du_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpermi_q:.*xvpermi\\.q.*lasx_xvpermi_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpermi_d:.*xvpermi\\.d.*lasx_xvpermi_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvperm_w:.*xvperm\\.w.*lasx_xvperm_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvldrepl_b:.*xvldrepl\\.b.*lasx_xvldrepl_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvldrepl_h:.*xvldrepl\\.h.*lasx_xvldrepl_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvldrepl_w:.*xvldrepl\\.w.*lasx_xvldrepl_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvldrepl_d:.*xvldrepl\\.d.*lasx_xvldrepl_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickve2gr_w:.*xvpickve2gr\\.w.*lasx_xvpickve2gr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickve2gr_wu:.*xvpickve2gr\\.wu.*lasx_xvpickve2gr_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickve2gr_d:.*xvpickve2gr\\.d.*lasx_xvpickve2gr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickve2gr_du:.*xvpickve2gr\\.du.*lasx_xvpickve2gr_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwev_q_d:.*xvaddwev\\.q\\.d.*lasx_xvaddwev_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwev_d_w:.*xvaddwev\\.d\\.w.*lasx_xvaddwev_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwev_w_h:.*xvaddwev\\.w\\.h.*lasx_xvaddwev_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwev_h_b:.*xvaddwev\\.h\\.b.*lasx_xvaddwev_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwev_q_du:.*xvaddwev\\.q\\.du.*lasx_xvaddwev_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwev_d_wu:.*xvaddwev\\.d\\.wu.*lasx_xvaddwev_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwev_w_hu:.*xvaddwev\\.w\\.hu.*lasx_xvaddwev_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwev_h_bu:.*xvaddwev\\.h\\.bu.*lasx_xvaddwev_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwev_q_d:.*xvsubwev\\.q\\.d.*lasx_xvsubwev_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwev_d_w:.*xvsubwev\\.d\\.w.*lasx_xvsubwev_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwev_w_h:.*xvsubwev\\.w\\.h.*lasx_xvsubwev_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwev_h_b:.*xvsubwev\\.h\\.b.*lasx_xvsubwev_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwev_q_du:.*xvsubwev\\.q\\.du.*lasx_xvsubwev_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwev_d_wu:.*xvsubwev\\.d\\.wu.*lasx_xvsubwev_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwev_w_hu:.*xvsubwev\\.w\\.hu.*lasx_xvsubwev_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwev_h_bu:.*xvsubwev\\.h\\.bu.*lasx_xvsubwev_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwev_q_d:.*xvmulwev\\.q\\.d.*lasx_xvmulwev_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwev_d_w:.*xvmulwev\\.d\\.w.*lasx_xvmulwev_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwev_w_h:.*xvmulwev\\.w\\.h.*lasx_xvmulwev_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwev_h_b:.*xvmulwev\\.h\\.b.*lasx_xvmulwev_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwev_q_du:.*xvmulwev\\.q\\.du.*lasx_xvmulwev_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwev_d_wu:.*xvmulwev\\.d\\.wu.*lasx_xvmulwev_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwev_w_hu:.*xvmulwev\\.w\\.hu.*lasx_xvmulwev_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwev_h_bu:.*xvmulwev\\.h\\.bu.*lasx_xvmulwev_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwod_q_d:.*xvaddwod\\.q\\.d.*lasx_xvaddwod_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwod_d_w:.*xvaddwod\\.d\\.w.*lasx_xvaddwod_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwod_w_h:.*xvaddwod\\.w\\.h.*lasx_xvaddwod_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwod_h_b:.*xvaddwod\\.h\\.b.*lasx_xvaddwod_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwod_q_du:.*xvaddwod\\.q\\.du.*lasx_xvaddwod_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwod_d_wu:.*xvaddwod\\.d\\.wu.*lasx_xvaddwod_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwod_w_hu:.*xvaddwod\\.w\\.hu.*lasx_xvaddwod_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwod_h_bu:.*xvaddwod\\.h\\.bu.*lasx_xvaddwod_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwod_q_d:.*xvsubwod\\.q\\.d.*lasx_xvsubwod_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwod_d_w:.*xvsubwod\\.d\\.w.*lasx_xvsubwod_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwod_w_h:.*xvsubwod\\.w\\.h.*lasx_xvsubwod_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwod_h_b:.*xvsubwod\\.h\\.b.*lasx_xvsubwod_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwod_q_du:.*xvsubwod\\.q\\.du.*lasx_xvsubwod_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwod_d_wu:.*xvsubwod\\.d\\.wu.*lasx_xvsubwod_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwod_w_hu:.*xvsubwod\\.w\\.hu.*lasx_xvsubwod_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsubwod_h_bu:.*xvsubwod\\.h\\.bu.*lasx_xvsubwod_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwod_q_d:.*xvmulwod\\.q\\.d.*lasx_xvmulwod_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwod_d_w:.*xvmulwod\\.d\\.w.*lasx_xvmulwod_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwod_w_h:.*xvmulwod\\.w\\.h.*lasx_xvmulwod_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwod_h_b:.*xvmulwod\\.h\\.b.*lasx_xvmulwod_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwod_q_du:.*xvmulwod\\.q\\.du.*lasx_xvmulwod_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwod_d_wu:.*xvmulwod\\.d\\.wu.*lasx_xvmulwod_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwod_w_hu:.*xvmulwod\\.w\\.hu.*lasx_xvmulwod_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwod_h_bu:.*xvmulwod\\.h\\.bu.*lasx_xvmulwod_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwev_d_wu_w:.*xvaddwev\\.d\\.wu\\.w.*lasx_xvaddwev_d_wu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwev_w_hu_h:.*xvaddwev\\.w\\.hu\\.h.*lasx_xvaddwev_w_hu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwev_h_bu_b:.*xvaddwev\\.h\\.bu\\.b.*lasx_xvaddwev_h_bu_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwev_d_wu_w:.*xvmulwev\\.d\\.wu\\.w.*lasx_xvmulwev_d_wu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwev_w_hu_h:.*xvmulwev\\.w\\.hu\\.h.*lasx_xvmulwev_w_hu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwev_h_bu_b:.*xvmulwev\\.h\\.bu\\.b.*lasx_xvmulwev_h_bu_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwod_d_wu_w:.*xvaddwod\\.d\\.wu\\.w.*lasx_xvaddwod_d_wu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwod_w_hu_h:.*xvaddwod\\.w\\.hu\\.h.*lasx_xvaddwod_w_hu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwod_h_bu_b:.*xvaddwod\\.h\\.bu\\.b.*lasx_xvaddwod_h_bu_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwod_d_wu_w:.*xvmulwod\\.d\\.wu\\.w.*lasx_xvmulwod_d_wu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwod_w_hu_h:.*xvmulwod\\.w\\.hu\\.h.*lasx_xvmulwod_w_hu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwod_h_bu_b:.*xvmulwod\\.h\\.bu\\.b.*lasx_xvmulwod_h_bu_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhaddw_q_d:.*xvhaddw\\.q\\.d.*lasx_xvhaddw_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhaddw_qu_du:.*xvhaddw\\.qu\\.du.*lasx_xvhaddw_qu_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhsubw_q_d:.*xvhsubw\\.q\\.d.*lasx_xvhsubw_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvhsubw_qu_du:.*xvhsubw\\.qu\\.du.*lasx_xvhsubw_qu_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwev_q_d:.*xvmaddwev\\.q\\.d.*lasx_xvmaddwev_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwev_d_w:.*xvmaddwev\\.d\\.w.*lasx_xvmaddwev_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwev_w_h:.*xvmaddwev\\.w\\.h.*lasx_xvmaddwev_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwev_h_b:.*xvmaddwev\\.h\\.b.*lasx_xvmaddwev_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwev_q_du:.*xvmaddwev\\.q\\.du.*lasx_xvmaddwev_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwev_d_wu:.*xvmaddwev\\.d\\.wu.*lasx_xvmaddwev_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwev_w_hu:.*xvmaddwev\\.w\\.hu.*lasx_xvmaddwev_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwev_h_bu:.*xvmaddwev\\.h\\.bu.*lasx_xvmaddwev_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwod_q_d:.*xvmaddwod\\.q\\.d.*lasx_xvmaddwod_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwod_d_w:.*xvmaddwod\\.d\\.w.*lasx_xvmaddwod_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwod_w_h:.*xvmaddwod\\.w\\.h.*lasx_xvmaddwod_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwod_h_b:.*xvmaddwod\\.h\\.b.*lasx_xvmaddwod_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwod_q_du:.*xvmaddwod\\.q\\.du.*lasx_xvmaddwod_q_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwod_d_wu:.*xvmaddwod\\.d\\.wu.*lasx_xvmaddwod_d_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwod_w_hu:.*xvmaddwod\\.w\\.hu.*lasx_xvmaddwod_w_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwod_h_bu:.*xvmaddwod\\.h\\.bu.*lasx_xvmaddwod_h_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwev_q_du_d:.*xvmaddwev\\.q\\.du\\.d.*lasx_xvmaddwev_q_du_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwev_d_wu_w:.*xvmaddwev\\.d\\.wu\\.w.*lasx_xvmaddwev_d_wu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwev_w_hu_h:.*xvmaddwev\\.w\\.hu\\.h.*lasx_xvmaddwev_w_hu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwev_h_bu_b:.*xvmaddwev\\.h\\.bu\\.b.*lasx_xvmaddwev_h_bu_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwod_q_du_d:.*xvmaddwod\\.q\\.du\\.d.*lasx_xvmaddwod_q_du_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwod_d_wu_w:.*xvmaddwod\\.d\\.wu\\.w.*lasx_xvmaddwod_d_wu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwod_w_hu_h:.*xvmaddwod\\.w\\.hu\\.h.*lasx_xvmaddwod_w_hu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmaddwod_h_bu_b:.*xvmaddwod\\.h\\.bu\\.b.*lasx_xvmaddwod_h_bu_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrotr_b:.*xvrotr\\.b.*lasx_xvrotr_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrotr_h:.*xvrotr\\.h.*lasx_xvrotr_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrotr_w:.*xvrotr\\.w.*lasx_xvrotr_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrotr_d:.*xvrotr\\.d.*lasx_xvrotr_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvadd_q:.*xvadd\\.q.*lasx_xvadd_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsub_q:.*xvsub\\.q.*lasx_xvsub_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwev_q_du_d:.*xvaddwev\\.q\\.du\\.d.*lasx_xvaddwev_q_du_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvaddwod_q_du_d:.*xvaddwod\\.q\\.du\\.d.*lasx_xvaddwod_q_du_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwev_q_du_d:.*xvmulwev\\.q\\.du\\.d.*lasx_xvmulwev_q_du_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmulwod_q_du_d:.*xvmulwod\\.q\\.du\\.d.*lasx_xvmulwod_q_du_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmskgez_b:.*xvmskgez\\.b.*lasx_xvmskgez_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvmsknz_b:.*xvmsknz\\.b.*lasx_xvmsknz_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvexth_h_b:.*xvexth\\.h\\.b.*lasx_xvexth_h_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvexth_w_h:.*xvexth\\.w\\.h.*lasx_xvexth_w_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvexth_d_w:.*xvexth\\.d\\.w.*lasx_xvexth_d_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvexth_q_d:.*xvexth\\.q\\.d.*lasx_xvexth_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvexth_hu_bu:.*xvexth\\.hu\\.bu.*lasx_xvexth_hu_bu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvexth_wu_hu:.*xvexth\\.wu\\.hu.*lasx_xvexth_wu_hu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvexth_du_wu:.*xvexth\\.du\\.wu.*lasx_xvexth_du_wu" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvexth_qu_du:.*xvexth\\.qu\\.du.*lasx_xvexth_qu_du" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrotri_b:.*xvrotri\\.b.*lasx_xvrotri_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrotri_h:.*xvrotri\\.h.*lasx_xvrotri_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrotri_w:.*xvrotri\\.w.*lasx_xvrotri_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrotri_d:.*xvrotri\\.d.*lasx_xvrotri_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvextl_q_d:.*xvextl\\.q\\.d.*lasx_xvextl_q_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlni_b_h:.*xvsrlni\\.b\\.h.*lasx_xvsrlni_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlni_h_w:.*xvsrlni\\.h\\.w.*lasx_xvsrlni_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlni_w_d:.*xvsrlni\\.w\\.d.*lasx_xvsrlni_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlni_d_q:.*xvsrlni\\.d\\.q.*lasx_xvsrlni_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlrni_b_h:.*xvsrlrni\\.b\\.h.*lasx_xvsrlrni_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlrni_h_w:.*xvsrlrni\\.h\\.w.*lasx_xvsrlrni_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlrni_w_d:.*xvsrlrni\\.w\\.d.*lasx_xvsrlrni_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrlrni_d_q:.*xvsrlrni\\.d\\.q.*lasx_xvsrlrni_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlni_b_h:.*xvssrlni\\.b\\.h.*lasx_xvssrlni_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlni_h_w:.*xvssrlni\\.h\\.w.*lasx_xvssrlni_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlni_w_d:.*xvssrlni\\.w\\.d.*lasx_xvssrlni_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlni_d_q:.*xvssrlni\\.d\\.q.*lasx_xvssrlni_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlni_bu_h:.*xvssrlni\\.bu\\.h.*lasx_xvssrlni_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlni_hu_w:.*xvssrlni\\.hu\\.w.*lasx_xvssrlni_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlni_wu_d:.*xvssrlni\\.wu\\.d.*lasx_xvssrlni_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlni_du_q:.*xvssrlni\\.du\\.q.*lasx_xvssrlni_du_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrni_b_h:.*xvssrlrni\\.b\\.h.*lasx_xvssrlrni_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrni_h_w:.*xvssrlrni\\.h\\.w.*lasx_xvssrlrni_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrni_w_d:.*xvssrlrni\\.w\\.d.*lasx_xvssrlrni_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrni_d_q:.*xvssrlrni\\.d\\.q.*lasx_xvssrlrni_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrni_bu_h:.*xvssrlrni\\.bu\\.h.*lasx_xvssrlrni_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrni_hu_w:.*xvssrlrni\\.hu\\.w.*lasx_xvssrlrni_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrni_wu_d:.*xvssrlrni\\.wu\\.d.*lasx_xvssrlrni_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrlrni_du_q:.*xvssrlrni\\.du\\.q.*lasx_xvssrlrni_du_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrani_b_h:.*xvsrani\\.b\\.h.*lasx_xvsrani_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrani_h_w:.*xvsrani\\.h\\.w.*lasx_xvsrani_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrani_w_d:.*xvsrani\\.w\\.d.*lasx_xvsrani_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrani_d_q:.*xvsrani\\.d\\.q.*lasx_xvsrani_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrarni_b_h:.*xvsrarni\\.b\\.h.*lasx_xvsrarni_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrarni_h_w:.*xvsrarni\\.h\\.w.*lasx_xvsrarni_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrarni_w_d:.*xvsrarni\\.w\\.d.*lasx_xvsrarni_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvsrarni_d_q:.*xvsrarni\\.d\\.q.*lasx_xvsrarni_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrani_b_h:.*xvssrani\\.b\\.h.*lasx_xvssrani_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrani_h_w:.*xvssrani\\.h\\.w.*lasx_xvssrani_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrani_w_d:.*xvssrani\\.w\\.d.*lasx_xvssrani_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrani_d_q:.*xvssrani\\.d\\.q.*lasx_xvssrani_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrani_bu_h:.*xvssrani\\.bu\\.h.*lasx_xvssrani_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrani_hu_w:.*xvssrani\\.hu\\.w.*lasx_xvssrani_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrani_wu_d:.*xvssrani\\.wu\\.d.*lasx_xvssrani_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrani_du_q:.*xvssrani\\.du\\.q.*lasx_xvssrani_du_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarni_b_h:.*xvssrarni\\.b\\.h.*lasx_xvssrarni_b_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarni_h_w:.*xvssrarni\\.h\\.w.*lasx_xvssrarni_h_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarni_w_d:.*xvssrarni\\.w\\.d.*lasx_xvssrarni_w_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarni_d_q:.*xvssrarni\\.d\\.q.*lasx_xvssrarni_d_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarni_bu_h:.*xvssrarni\\.bu\\.h.*lasx_xvssrarni_bu_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarni_hu_w:.*xvssrarni\\.hu\\.w.*lasx_xvssrarni_hu_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarni_wu_d:.*xvssrarni\\.wu\\.d.*lasx_xvssrarni_wu_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvssrarni_du_q:.*xvssrarni\\.du\\.q.*lasx_xvssrarni_du_q" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xbnz_b:.*xvsetanyeqz\\.b.*lasx_xbnz_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xbnz_d:.*xvsetanyeqz\\.d.*lasx_xbnz_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xbnz_h:.*xvsetanyeqz\\.h.*lasx_xbnz_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xbnz_v:.*xvseteqz\\.v.*lasx_xbnz_v" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xbnz_w:.*xvsetanyeqz\\.w.*lasx_xbnz_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xbz_b:.*xvsetallnez\\.b.*lasx_xbz_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xbz_d:.*xvsetallnez\\.d.*lasx_xbz_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xbz_h:.*xvsetallnez\\.h.*lasx_xbz_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xbz_v:.*xvsetnez\\.v.*lasx_xbz_v" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xbz_w:.*xvsetallnez\\.w.*lasx_xbz_w" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_caf_d:.*xvfcmp\\.caf\\.d.*lasx_xvfcmp_caf_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_caf_s:.*xvfcmp\\.caf\\.s.*lasx_xvfcmp_caf_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_ceq_d:.*xvfcmp\\.ceq\\.d.*lasx_xvfcmp_ceq_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_ceq_s:.*xvfcmp\\.ceq\\.s.*lasx_xvfcmp_ceq_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cle_d:.*xvfcmp\\.cle\\.d.*lasx_xvfcmp_cle_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cle_s:.*xvfcmp\\.cle\\.s.*lasx_xvfcmp_cle_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_clt_d:.*xvfcmp\\.clt\\.d.*lasx_xvfcmp_clt_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_clt_s:.*xvfcmp\\.clt\\.s.*lasx_xvfcmp_clt_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cne_d:.*xvfcmp\\.cne\\.d.*lasx_xvfcmp_cne_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cne_s:.*xvfcmp\\.cne\\.s.*lasx_xvfcmp_cne_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cor_d:.*xvfcmp\\.cor\\.d.*lasx_xvfcmp_cor_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cor_s:.*xvfcmp\\.cor\\.s.*lasx_xvfcmp_cor_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cueq_d:.*xvfcmp\\.cueq\\.d.*lasx_xvfcmp_cueq_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cueq_s:.*xvfcmp\\.cueq\\.s.*lasx_xvfcmp_cueq_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cule_d:.*xvfcmp\\.cule\\.d.*lasx_xvfcmp_cule_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cule_s:.*xvfcmp\\.cule\\.s.*lasx_xvfcmp_cule_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cult_d:.*xvfcmp\\.cult\\.d.*lasx_xvfcmp_cult_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cult_s:.*xvfcmp\\.cult\\.s.*lasx_xvfcmp_cult_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cun_d:.*xvfcmp\\.cun\\.d.*lasx_xvfcmp_cun_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cune_d:.*xvfcmp\\.cune\\.d.*lasx_xvfcmp_cune_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cune_s:.*xvfcmp\\.cune\\.s.*lasx_xvfcmp_cune_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_cun_s:.*xvfcmp\\.cun\\.s.*lasx_xvfcmp_cun_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_saf_d:.*xvfcmp\\.saf\\.d.*lasx_xvfcmp_saf_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_saf_s:.*xvfcmp\\.saf\\.s.*lasx_xvfcmp_saf_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_seq_d:.*xvfcmp\\.seq\\.d.*lasx_xvfcmp_seq_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_seq_s:.*xvfcmp\\.seq\\.s.*lasx_xvfcmp_seq_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sle_d:.*xvfcmp\\.sle\\.d.*lasx_xvfcmp_sle_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sle_s:.*xvfcmp\\.sle\\.s.*lasx_xvfcmp_sle_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_slt_d:.*xvfcmp\\.slt\\.d.*lasx_xvfcmp_slt_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_slt_s:.*xvfcmp\\.slt\\.s.*lasx_xvfcmp_slt_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sne_d:.*xvfcmp\\.sne\\.d.*lasx_xvfcmp_sne_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sne_s:.*xvfcmp\\.sne\\.s.*lasx_xvfcmp_sne_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sor_d:.*xvfcmp\\.sor\\.d.*lasx_xvfcmp_sor_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sor_s:.*xvfcmp\\.sor\\.s.*lasx_xvfcmp_sor_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sueq_d:.*xvfcmp\\.sueq\\.d.*lasx_xvfcmp_sueq_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sueq_s:.*xvfcmp\\.sueq\\.s.*lasx_xvfcmp_sueq_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sule_d:.*xvfcmp\\.sule\\.d.*lasx_xvfcmp_sule_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sule_s:.*xvfcmp\\.sule\\.s.*lasx_xvfcmp_sule_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sult_d:.*xvfcmp\\.sult\\.d.*lasx_xvfcmp_sult_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sult_s:.*xvfcmp\\.sult\\.s.*lasx_xvfcmp_sult_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sun_d:.*xvfcmp\\.sun\\.d.*lasx_xvfcmp_sun_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sune_d:.*xvfcmp\\.sune\\.d.*lasx_xvfcmp_sune_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sune_s:.*xvfcmp\\.sune\\.s.*lasx_xvfcmp_sune_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvfcmp_sun_s:.*xvfcmp\\.sun\\.s.*lasx_xvfcmp_sun_s" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickve_d_f:.*xvpickve\\.d.*lasx_xvpickve_d_f" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvpickve_w_f:.*xvpickve\\.w.*lasx_xvpickve_w_f" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrepli_b:.*xvrepli\\.b.*lasx_xvrepli_b" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrepli_d:.*xvrepli\\.d.*lasx_xvrepli_d" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrepli_h:.*xvrepli\\.h.*lasx_xvrepli_h" 1 } } */
+/* { dg-final { scan-assembler-times "lasx_xvrepli_w:.*xvrepli\\.w.*lasx_xvrepli_w" 1 } } */
+
+typedef signed char v32i8 __attribute__ ((vector_size(32), aligned(32)));
+typedef signed char v32i8_b __attribute__ ((vector_size(32), aligned(1)));
+typedef unsigned char v32u8 __attribute__ ((vector_size(32), aligned(32)));
+typedef unsigned char v32u8_b __attribute__ ((vector_size(32), aligned(1)));
+typedef short v16i16 __attribute__ ((vector_size(32), aligned(32)));
+typedef short v16i16_h __attribute__ ((vector_size(32), aligned(2)));
+typedef unsigned short v16u16 __attribute__ ((vector_size(32), aligned(32)));
+typedef unsigned short v16u16_h __attribute__ ((vector_size(32), aligned(2)));
+typedef int v8i32 __attribute__ ((vector_size(32), aligned(32)));
+typedef int v8i32_w __attribute__ ((vector_size(32), aligned(4)));
+typedef unsigned int v8u32 __attribute__ ((vector_size(32), aligned(32)));
+typedef unsigned int v8u32_w __attribute__ ((vector_size(32), aligned(4)));
+typedef long long v4i64 __attribute__ ((vector_size(32), aligned(32)));
+typedef long long v4i64_d __attribute__ ((vector_size(32), aligned(8)));
+typedef unsigned long long v4u64 __attribute__ ((vector_size(32), aligned(32)));
+typedef unsigned long long v4u64_d __attribute__ ((vector_size(32), aligned(8)));
+typedef float v8f32 __attribute__ ((vector_size(32), aligned(32)));
+typedef float v8f32_w __attribute__ ((vector_size(32), aligned(4)));
+typedef double v4f64 __attribute__ ((vector_size(32), aligned(32)));
+typedef double v4f64_d __attribute__ ((vector_size(32), aligned(8)));
+
+typedef float __m256 __attribute__ ((__vector_size__ (32), __may_alias__));
+typedef long long __m256i __attribute__ ((__vector_size__ (32), __may_alias__));
+typedef double __m256d __attribute__ ((__vector_size__ (32), __may_alias__));
+
+/* Unaligned version of the same types.  */
+typedef float __m256_u __attribute__ ((__vector_size__ (32), __may_alias__, __aligned__ (1)));
+typedef long long __m256i_u __attribute__ ((__vector_size__ (32), __may_alias__, __aligned__ (1)));
+typedef double __m256d_u __attribute__ ((__vector_size__ (32), __may_alias__, __aligned__ (1)));
+
+v32i8 __lasx_xvsll_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsll_b(_1, _2);}
+v16i16 __lasx_xvsll_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsll_h(_1, _2);}
+v8i32 __lasx_xvsll_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsll_w(_1, _2);}
+v4i64 __lasx_xvsll_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsll_d(_1, _2);}
+v32i8 __lasx_xvslli_b(v32i8 _1){return __builtin_lasx_xvslli_b(_1, 1);}
+v16i16 __lasx_xvslli_h(v16i16 _1){return __builtin_lasx_xvslli_h(_1, 1);}
+v8i32 __lasx_xvslli_w(v8i32 _1){return __builtin_lasx_xvslli_w(_1, 1);}
+v4i64 __lasx_xvslli_d(v4i64 _1){return __builtin_lasx_xvslli_d(_1, 1);}
+v32i8 __lasx_xvsra_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsra_b(_1, _2);}
+v16i16 __lasx_xvsra_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsra_h(_1, _2);}
+v8i32 __lasx_xvsra_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsra_w(_1, _2);}
+v4i64 __lasx_xvsra_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsra_d(_1, _2);}
+v32i8 __lasx_xvsrai_b(v32i8 _1){return __builtin_lasx_xvsrai_b(_1, 1);}
+v16i16 __lasx_xvsrai_h(v16i16 _1){return __builtin_lasx_xvsrai_h(_1, 1);}
+v8i32 __lasx_xvsrai_w(v8i32 _1){return __builtin_lasx_xvsrai_w(_1, 1);}
+v4i64 __lasx_xvsrai_d(v4i64 _1){return __builtin_lasx_xvsrai_d(_1, 1);}
+v32i8 __lasx_xvsrar_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsrar_b(_1, _2);}
+v16i16 __lasx_xvsrar_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsrar_h(_1, _2);}
+v8i32 __lasx_xvsrar_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsrar_w(_1, _2);}
+v4i64 __lasx_xvsrar_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsrar_d(_1, _2);}
+v32i8 __lasx_xvsrari_b(v32i8 _1){return __builtin_lasx_xvsrari_b(_1, 1);}
+v16i16 __lasx_xvsrari_h(v16i16 _1){return __builtin_lasx_xvsrari_h(_1, 1);}
+v8i32 __lasx_xvsrari_w(v8i32 _1){return __builtin_lasx_xvsrari_w(_1, 1);}
+v4i64 __lasx_xvsrari_d(v4i64 _1){return __builtin_lasx_xvsrari_d(_1, 1);}
+v32i8 __lasx_xvsrl_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsrl_b(_1, _2);}
+v16i16 __lasx_xvsrl_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsrl_h(_1, _2);}
+v8i32 __lasx_xvsrl_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsrl_w(_1, _2);}
+v4i64 __lasx_xvsrl_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsrl_d(_1, _2);}
+v32i8 __lasx_xvsrli_b(v32i8 _1){return __builtin_lasx_xvsrli_b(_1, 1);}
+v16i16 __lasx_xvsrli_h(v16i16 _1){return __builtin_lasx_xvsrli_h(_1, 1);}
+v8i32 __lasx_xvsrli_w(v8i32 _1){return __builtin_lasx_xvsrli_w(_1, 1);}
+v4i64 __lasx_xvsrli_d(v4i64 _1){return __builtin_lasx_xvsrli_d(_1, 1);}
+v32i8 __lasx_xvsrlr_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsrlr_b(_1, _2);}
+v16i16 __lasx_xvsrlr_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsrlr_h(_1, _2);}
+v8i32 __lasx_xvsrlr_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsrlr_w(_1, _2);}
+v4i64 __lasx_xvsrlr_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsrlr_d(_1, _2);}
+v32i8 __lasx_xvsrlri_b(v32i8 _1){return __builtin_lasx_xvsrlri_b(_1, 1);}
+v16i16 __lasx_xvsrlri_h(v16i16 _1){return __builtin_lasx_xvsrlri_h(_1, 1);}
+v8i32 __lasx_xvsrlri_w(v8i32 _1){return __builtin_lasx_xvsrlri_w(_1, 1);}
+v4i64 __lasx_xvsrlri_d(v4i64 _1){return __builtin_lasx_xvsrlri_d(_1, 1);}
+v32u8 __lasx_xvbitclr_b(v32u8 _1, v32u8 _2){return __builtin_lasx_xvbitclr_b(_1, _2);}
+v16u16 __lasx_xvbitclr_h(v16u16 _1, v16u16 _2){return __builtin_lasx_xvbitclr_h(_1, _2);}
+v8u32 __lasx_xvbitclr_w(v8u32 _1, v8u32 _2){return __builtin_lasx_xvbitclr_w(_1, _2);}
+v4u64 __lasx_xvbitclr_d(v4u64 _1, v4u64 _2){return __builtin_lasx_xvbitclr_d(_1, _2);}
+v32u8 __lasx_xvbitclri_b(v32u8 _1){return __builtin_lasx_xvbitclri_b(_1, 1);}
+v16u16 __lasx_xvbitclri_h(v16u16 _1){return __builtin_lasx_xvbitclri_h(_1, 1);}
+v8u32 __lasx_xvbitclri_w(v8u32 _1){return __builtin_lasx_xvbitclri_w(_1, 1);}
+v4u64 __lasx_xvbitclri_d(v4u64 _1){return __builtin_lasx_xvbitclri_d(_1, 1);}
+v32u8 __lasx_xvbitset_b(v32u8 _1, v32u8 _2){return __builtin_lasx_xvbitset_b(_1, _2);}
+v16u16 __lasx_xvbitset_h(v16u16 _1, v16u16 _2){return __builtin_lasx_xvbitset_h(_1, _2);}
+v8u32 __lasx_xvbitset_w(v8u32 _1, v8u32 _2){return __builtin_lasx_xvbitset_w(_1, _2);}
+v4u64 __lasx_xvbitset_d(v4u64 _1, v4u64 _2){return __builtin_lasx_xvbitset_d(_1, _2);}
+v32u8 __lasx_xvbitseti_b(v32u8 _1){return __builtin_lasx_xvbitseti_b(_1, 1);}
+v16u16 __lasx_xvbitseti_h(v16u16 _1){return __builtin_lasx_xvbitseti_h(_1, 1);}
+v8u32 __lasx_xvbitseti_w(v8u32 _1){return __builtin_lasx_xvbitseti_w(_1, 1);}
+v4u64 __lasx_xvbitseti_d(v4u64 _1){return __builtin_lasx_xvbitseti_d(_1, 1);}
+v32u8 __lasx_xvbitrev_b(v32u8 _1, v32u8 _2){return __builtin_lasx_xvbitrev_b(_1, _2);}
+v16u16 __lasx_xvbitrev_h(v16u16 _1, v16u16 _2){return __builtin_lasx_xvbitrev_h(_1, _2);}
+v8u32 __lasx_xvbitrev_w(v8u32 _1, v8u32 _2){return __builtin_lasx_xvbitrev_w(_1, _2);}
+v4u64 __lasx_xvbitrev_d(v4u64 _1, v4u64 _2){return __builtin_lasx_xvbitrev_d(_1, _2);}
+v32u8 __lasx_xvbitrevi_b(v32u8 _1){return __builtin_lasx_xvbitrevi_b(_1, 1);}
+v16u16 __lasx_xvbitrevi_h(v16u16 _1){return __builtin_lasx_xvbitrevi_h(_1, 1);}
+v8u32 __lasx_xvbitrevi_w(v8u32 _1){return __builtin_lasx_xvbitrevi_w(_1, 1);}
+v4u64 __lasx_xvbitrevi_d(v4u64 _1){return __builtin_lasx_xvbitrevi_d(_1, 1);}
+v32i8 __lasx_xvadd_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvadd_b(_1, _2);}
+v16i16 __lasx_xvadd_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvadd_h(_1, _2);}
+v8i32 __lasx_xvadd_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvadd_w(_1, _2);}
+v4i64 __lasx_xvadd_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvadd_d(_1, _2);}
+v32i8 __lasx_xvaddi_bu(v32i8 _1){return __builtin_lasx_xvaddi_bu(_1, 1);}
+v16i16 __lasx_xvaddi_hu(v16i16 _1){return __builtin_lasx_xvaddi_hu(_1, 1);}
+v8i32 __lasx_xvaddi_wu(v8i32 _1){return __builtin_lasx_xvaddi_wu(_1, 1);}
+v4i64 __lasx_xvaddi_du(v4i64 _1){return __builtin_lasx_xvaddi_du(_1, 1);}
+v32i8 __lasx_xvsub_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsub_b(_1, _2);}
+v16i16 __lasx_xvsub_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsub_h(_1, _2);}
+v8i32 __lasx_xvsub_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsub_w(_1, _2);}
+v4i64 __lasx_xvsub_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsub_d(_1, _2);}
+v32i8 __lasx_xvsubi_bu(v32i8 _1){return __builtin_lasx_xvsubi_bu(_1, 1);}
+v16i16 __lasx_xvsubi_hu(v16i16 _1){return __builtin_lasx_xvsubi_hu(_1, 1);}
+v8i32 __lasx_xvsubi_wu(v8i32 _1){return __builtin_lasx_xvsubi_wu(_1, 1);}
+v4i64 __lasx_xvsubi_du(v4i64 _1){return __builtin_lasx_xvsubi_du(_1, 1);}
+v32i8 __lasx_xvmax_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvmax_b(_1, _2);}
+v16i16 __lasx_xvmax_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvmax_h(_1, _2);}
+v8i32 __lasx_xvmax_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvmax_w(_1, _2);}
+v4i64 __lasx_xvmax_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvmax_d(_1, _2);}
+v32i8 __lasx_xvmaxi_b(v32i8 _1){return __builtin_lasx_xvmaxi_b(_1, 1);}
+v16i16 __lasx_xvmaxi_h(v16i16 _1){return __builtin_lasx_xvmaxi_h(_1, 1);}
+v8i32 __lasx_xvmaxi_w(v8i32 _1){return __builtin_lasx_xvmaxi_w(_1, 1);}
+v4i64 __lasx_xvmaxi_d(v4i64 _1){return __builtin_lasx_xvmaxi_d(_1, 1);}
+v32u8 __lasx_xvmax_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvmax_bu(_1, _2);}
+v16u16 __lasx_xvmax_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvmax_hu(_1, _2);}
+v8u32 __lasx_xvmax_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvmax_wu(_1, _2);}
+v4u64 __lasx_xvmax_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvmax_du(_1, _2);}
+v32u8 __lasx_xvmaxi_bu(v32u8 _1){return __builtin_lasx_xvmaxi_bu(_1, 1);}
+v16u16 __lasx_xvmaxi_hu(v16u16 _1){return __builtin_lasx_xvmaxi_hu(_1, 1);}
+v8u32 __lasx_xvmaxi_wu(v8u32 _1){return __builtin_lasx_xvmaxi_wu(_1, 1);}
+v4u64 __lasx_xvmaxi_du(v4u64 _1){return __builtin_lasx_xvmaxi_du(_1, 1);}
+v32i8 __lasx_xvmin_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvmin_b(_1, _2);}
+v16i16 __lasx_xvmin_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvmin_h(_1, _2);}
+v8i32 __lasx_xvmin_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvmin_w(_1, _2);}
+v4i64 __lasx_xvmin_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvmin_d(_1, _2);}
+v32i8 __lasx_xvmini_b(v32i8 _1){return __builtin_lasx_xvmini_b(_1, 1);}
+v16i16 __lasx_xvmini_h(v16i16 _1){return __builtin_lasx_xvmini_h(_1, 1);}
+v8i32 __lasx_xvmini_w(v8i32 _1){return __builtin_lasx_xvmini_w(_1, 1);}
+v4i64 __lasx_xvmini_d(v4i64 _1){return __builtin_lasx_xvmini_d(_1, 1);}
+v32u8 __lasx_xvmin_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvmin_bu(_1, _2);}
+v16u16 __lasx_xvmin_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvmin_hu(_1, _2);}
+v8u32 __lasx_xvmin_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvmin_wu(_1, _2);}
+v4u64 __lasx_xvmin_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvmin_du(_1, _2);}
+v32u8 __lasx_xvmini_bu(v32u8 _1){return __builtin_lasx_xvmini_bu(_1, 1);}
+v16u16 __lasx_xvmini_hu(v16u16 _1){return __builtin_lasx_xvmini_hu(_1, 1);}
+v8u32 __lasx_xvmini_wu(v8u32 _1){return __builtin_lasx_xvmini_wu(_1, 1);}
+v4u64 __lasx_xvmini_du(v4u64 _1){return __builtin_lasx_xvmini_du(_1, 1);}
+v32i8 __lasx_xvseq_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvseq_b(_1, _2);}
+v16i16 __lasx_xvseq_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvseq_h(_1, _2);}
+v8i32 __lasx_xvseq_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvseq_w(_1, _2);}
+v4i64 __lasx_xvseq_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvseq_d(_1, _2);}
+v32i8 __lasx_xvseqi_b(v32i8 _1){return __builtin_lasx_xvseqi_b(_1, 1);}
+v16i16 __lasx_xvseqi_h(v16i16 _1){return __builtin_lasx_xvseqi_h(_1, 1);}
+v8i32 __lasx_xvseqi_w(v8i32 _1){return __builtin_lasx_xvseqi_w(_1, 1);}
+v4i64 __lasx_xvseqi_d(v4i64 _1){return __builtin_lasx_xvseqi_d(_1, 1);}
+v32i8 __lasx_xvslt_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvslt_b(_1, _2);}
+v16i16 __lasx_xvslt_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvslt_h(_1, _2);}
+v8i32 __lasx_xvslt_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvslt_w(_1, _2);}
+v4i64 __lasx_xvslt_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvslt_d(_1, _2);}
+v32i8 __lasx_xvslti_b(v32i8 _1){return __builtin_lasx_xvslti_b(_1, 1);}
+v16i16 __lasx_xvslti_h(v16i16 _1){return __builtin_lasx_xvslti_h(_1, 1);}
+v8i32 __lasx_xvslti_w(v8i32 _1){return __builtin_lasx_xvslti_w(_1, 1);}
+v4i64 __lasx_xvslti_d(v4i64 _1){return __builtin_lasx_xvslti_d(_1, 1);}
+v32i8 __lasx_xvslt_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvslt_bu(_1, _2);}
+v16i16 __lasx_xvslt_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvslt_hu(_1, _2);}
+v8i32 __lasx_xvslt_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvslt_wu(_1, _2);}
+v4i64 __lasx_xvslt_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvslt_du(_1, _2);}
+v32i8 __lasx_xvslti_bu(v32u8 _1){return __builtin_lasx_xvslti_bu(_1, 1);}
+v16i16 __lasx_xvslti_hu(v16u16 _1){return __builtin_lasx_xvslti_hu(_1, 1);}
+v8i32 __lasx_xvslti_wu(v8u32 _1){return __builtin_lasx_xvslti_wu(_1, 1);}
+v4i64 __lasx_xvslti_du(v4u64 _1){return __builtin_lasx_xvslti_du(_1, 1);}
+v32i8 __lasx_xvsle_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsle_b(_1, _2);}
+v16i16 __lasx_xvsle_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsle_h(_1, _2);}
+v8i32 __lasx_xvsle_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsle_w(_1, _2);}
+v4i64 __lasx_xvsle_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsle_d(_1, _2);}
+v32i8 __lasx_xvslei_b(v32i8 _1){return __builtin_lasx_xvslei_b(_1, 1);}
+v16i16 __lasx_xvslei_h(v16i16 _1){return __builtin_lasx_xvslei_h(_1, 1);}
+v8i32 __lasx_xvslei_w(v8i32 _1){return __builtin_lasx_xvslei_w(_1, 1);}
+v4i64 __lasx_xvslei_d(v4i64 _1){return __builtin_lasx_xvslei_d(_1, 1);}
+v32i8 __lasx_xvsle_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvsle_bu(_1, _2);}
+v16i16 __lasx_xvsle_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvsle_hu(_1, _2);}
+v8i32 __lasx_xvsle_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvsle_wu(_1, _2);}
+v4i64 __lasx_xvsle_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvsle_du(_1, _2);}
+v32i8 __lasx_xvslei_bu(v32u8 _1){return __builtin_lasx_xvslei_bu(_1, 1);}
+v16i16 __lasx_xvslei_hu(v16u16 _1){return __builtin_lasx_xvslei_hu(_1, 1);}
+v8i32 __lasx_xvslei_wu(v8u32 _1){return __builtin_lasx_xvslei_wu(_1, 1);}
+v4i64 __lasx_xvslei_du(v4u64 _1){return __builtin_lasx_xvslei_du(_1, 1);}
+v32i8 __lasx_xvsat_b(v32i8 _1){return __builtin_lasx_xvsat_b(_1, 1);}
+v16i16 __lasx_xvsat_h(v16i16 _1){return __builtin_lasx_xvsat_h(_1, 1);}
+v8i32 __lasx_xvsat_w(v8i32 _1){return __builtin_lasx_xvsat_w(_1, 1);}
+v4i64 __lasx_xvsat_d(v4i64 _1){return __builtin_lasx_xvsat_d(_1, 1);}
+v32u8 __lasx_xvsat_bu(v32u8 _1){return __builtin_lasx_xvsat_bu(_1, 1);}
+v16u16 __lasx_xvsat_hu(v16u16 _1){return __builtin_lasx_xvsat_hu(_1, 1);}
+v8u32 __lasx_xvsat_wu(v8u32 _1){return __builtin_lasx_xvsat_wu(_1, 1);}
+v4u64 __lasx_xvsat_du(v4u64 _1){return __builtin_lasx_xvsat_du(_1, 1);}
+v32i8 __lasx_xvadda_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvadda_b(_1, _2);}
+v16i16 __lasx_xvadda_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvadda_h(_1, _2);}
+v8i32 __lasx_xvadda_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvadda_w(_1, _2);}
+v4i64 __lasx_xvadda_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvadda_d(_1, _2);}
+v32i8 __lasx_xvsadd_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsadd_b(_1, _2);}
+v16i16 __lasx_xvsadd_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsadd_h(_1, _2);}
+v8i32 __lasx_xvsadd_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsadd_w(_1, _2);}
+v4i64 __lasx_xvsadd_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsadd_d(_1, _2);}
+v32u8 __lasx_xvsadd_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvsadd_bu(_1, _2);}
+v16u16 __lasx_xvsadd_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvsadd_hu(_1, _2);}
+v8u32 __lasx_xvsadd_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvsadd_wu(_1, _2);}
+v4u64 __lasx_xvsadd_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvsadd_du(_1, _2);}
+v32i8 __lasx_xvavg_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvavg_b(_1, _2);}
+v16i16 __lasx_xvavg_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvavg_h(_1, _2);}
+v8i32 __lasx_xvavg_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvavg_w(_1, _2);}
+v4i64 __lasx_xvavg_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvavg_d(_1, _2);}
+v32u8 __lasx_xvavg_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvavg_bu(_1, _2);}
+v16u16 __lasx_xvavg_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvavg_hu(_1, _2);}
+v8u32 __lasx_xvavg_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvavg_wu(_1, _2);}
+v4u64 __lasx_xvavg_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvavg_du(_1, _2);}
+v32i8 __lasx_xvavgr_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvavgr_b(_1, _2);}
+v16i16 __lasx_xvavgr_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvavgr_h(_1, _2);}
+v8i32 __lasx_xvavgr_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvavgr_w(_1, _2);}
+v4i64 __lasx_xvavgr_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvavgr_d(_1, _2);}
+v32u8 __lasx_xvavgr_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvavgr_bu(_1, _2);}
+v16u16 __lasx_xvavgr_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvavgr_hu(_1, _2);}
+v8u32 __lasx_xvavgr_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvavgr_wu(_1, _2);}
+v4u64 __lasx_xvavgr_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvavgr_du(_1, _2);}
+v32i8 __lasx_xvssub_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvssub_b(_1, _2);}
+v16i16 __lasx_xvssub_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvssub_h(_1, _2);}
+v8i32 __lasx_xvssub_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvssub_w(_1, _2);}
+v4i64 __lasx_xvssub_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvssub_d(_1, _2);}
+v32u8 __lasx_xvssub_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvssub_bu(_1, _2);}
+v16u16 __lasx_xvssub_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvssub_hu(_1, _2);}
+v8u32 __lasx_xvssub_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvssub_wu(_1, _2);}
+v4u64 __lasx_xvssub_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvssub_du(_1, _2);}
+v32i8 __lasx_xvabsd_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvabsd_b(_1, _2);}
+v16i16 __lasx_xvabsd_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvabsd_h(_1, _2);}
+v8i32 __lasx_xvabsd_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvabsd_w(_1, _2);}
+v4i64 __lasx_xvabsd_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvabsd_d(_1, _2);}
+v32u8 __lasx_xvabsd_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvabsd_bu(_1, _2);}
+v16u16 __lasx_xvabsd_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvabsd_hu(_1, _2);}
+v8u32 __lasx_xvabsd_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvabsd_wu(_1, _2);}
+v4u64 __lasx_xvabsd_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvabsd_du(_1, _2);}
+v32i8 __lasx_xvmul_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvmul_b(_1, _2);}
+v16i16 __lasx_xvmul_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvmul_h(_1, _2);}
+v8i32 __lasx_xvmul_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvmul_w(_1, _2);}
+v4i64 __lasx_xvmul_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvmul_d(_1, _2);}
+v32i8 __lasx_xvmadd_b(v32i8 _1, v32i8 _2, v32i8 _3){return __builtin_lasx_xvmadd_b(_1, _2, _3);}
+v16i16 __lasx_xvmadd_h(v16i16 _1, v16i16 _2, v16i16 _3){return __builtin_lasx_xvmadd_h(_1, _2, _3);}
+v8i32 __lasx_xvmadd_w(v8i32 _1, v8i32 _2, v8i32 _3){return __builtin_lasx_xvmadd_w(_1, _2, _3);}
+v4i64 __lasx_xvmadd_d(v4i64 _1, v4i64 _2, v4i64 _3){return __builtin_lasx_xvmadd_d(_1, _2, _3);}
+v32i8 __lasx_xvmsub_b(v32i8 _1, v32i8 _2, v32i8 _3){return __builtin_lasx_xvmsub_b(_1, _2, _3);}
+v16i16 __lasx_xvmsub_h(v16i16 _1, v16i16 _2, v16i16 _3){return __builtin_lasx_xvmsub_h(_1, _2, _3);}
+v8i32 __lasx_xvmsub_w(v8i32 _1, v8i32 _2, v8i32 _3){return __builtin_lasx_xvmsub_w(_1, _2, _3);}
+v4i64 __lasx_xvmsub_d(v4i64 _1, v4i64 _2, v4i64 _3){return __builtin_lasx_xvmsub_d(_1, _2, _3);}
+v32i8 __lasx_xvdiv_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvdiv_b(_1, _2);}
+v16i16 __lasx_xvdiv_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvdiv_h(_1, _2);}
+v8i32 __lasx_xvdiv_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvdiv_w(_1, _2);}
+v4i64 __lasx_xvdiv_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvdiv_d(_1, _2);}
+v32u8 __lasx_xvdiv_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvdiv_bu(_1, _2);}
+v16u16 __lasx_xvdiv_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvdiv_hu(_1, _2);}
+v8u32 __lasx_xvdiv_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvdiv_wu(_1, _2);}
+v4u64 __lasx_xvdiv_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvdiv_du(_1, _2);}
+v16i16 __lasx_xvhaddw_h_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvhaddw_h_b(_1, _2);}
+v8i32 __lasx_xvhaddw_w_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvhaddw_w_h(_1, _2);}
+v4i64 __lasx_xvhaddw_d_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvhaddw_d_w(_1, _2);}
+v16u16 __lasx_xvhaddw_hu_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvhaddw_hu_bu(_1, _2);}
+v8u32 __lasx_xvhaddw_wu_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvhaddw_wu_hu(_1, _2);}
+v4u64 __lasx_xvhaddw_du_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvhaddw_du_wu(_1, _2);}
+v16i16 __lasx_xvhsubw_h_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvhsubw_h_b(_1, _2);}
+v8i32 __lasx_xvhsubw_w_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvhsubw_w_h(_1, _2);}
+v4i64 __lasx_xvhsubw_d_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvhsubw_d_w(_1, _2);}
+v16i16 __lasx_xvhsubw_hu_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvhsubw_hu_bu(_1, _2);}
+v8i32 __lasx_xvhsubw_wu_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvhsubw_wu_hu(_1, _2);}
+v4i64 __lasx_xvhsubw_du_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvhsubw_du_wu(_1, _2);}
+v32i8 __lasx_xvmod_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvmod_b(_1, _2);}
+v16i16 __lasx_xvmod_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvmod_h(_1, _2);}
+v8i32 __lasx_xvmod_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvmod_w(_1, _2);}
+v4i64 __lasx_xvmod_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvmod_d(_1, _2);}
+v32u8 __lasx_xvmod_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvmod_bu(_1, _2);}
+v16u16 __lasx_xvmod_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvmod_hu(_1, _2);}
+v8u32 __lasx_xvmod_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvmod_wu(_1, _2);}
+v4u64 __lasx_xvmod_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvmod_du(_1, _2);}
+v32i8 __lasx_xvrepl128vei_b(v32i8 _1){return __builtin_lasx_xvrepl128vei_b(_1, 1);}
+v16i16 __lasx_xvrepl128vei_h(v16i16 _1){return __builtin_lasx_xvrepl128vei_h(_1, 1);}
+v8i32 __lasx_xvrepl128vei_w(v8i32 _1){return __builtin_lasx_xvrepl128vei_w(_1, 1);}
+v4i64 __lasx_xvrepl128vei_d(v4i64 _1){return __builtin_lasx_xvrepl128vei_d(_1, 1);}
+v32i8 __lasx_xvpickev_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvpickev_b(_1, _2);}
+v16i16 __lasx_xvpickev_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvpickev_h(_1, _2);}
+v8i32 __lasx_xvpickev_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvpickev_w(_1, _2);}
+v4i64 __lasx_xvpickev_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvpickev_d(_1, _2);}
+v32i8 __lasx_xvpickod_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvpickod_b(_1, _2);}
+v16i16 __lasx_xvpickod_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvpickod_h(_1, _2);}
+v8i32 __lasx_xvpickod_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvpickod_w(_1, _2);}
+v4i64 __lasx_xvpickod_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvpickod_d(_1, _2);}
+v32i8 __lasx_xvilvh_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvilvh_b(_1, _2);}
+v16i16 __lasx_xvilvh_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvilvh_h(_1, _2);}
+v8i32 __lasx_xvilvh_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvilvh_w(_1, _2);}
+v4i64 __lasx_xvilvh_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvilvh_d(_1, _2);}
+v32i8 __lasx_xvilvl_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvilvl_b(_1, _2);}
+v16i16 __lasx_xvilvl_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvilvl_h(_1, _2);}
+v8i32 __lasx_xvilvl_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvilvl_w(_1, _2);}
+v4i64 __lasx_xvilvl_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvilvl_d(_1, _2);}
+v32i8 __lasx_xvpackev_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvpackev_b(_1, _2);}
+v16i16 __lasx_xvpackev_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvpackev_h(_1, _2);}
+v8i32 __lasx_xvpackev_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvpackev_w(_1, _2);}
+v4i64 __lasx_xvpackev_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvpackev_d(_1, _2);}
+v32i8 __lasx_xvpackod_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvpackod_b(_1, _2);}
+v16i16 __lasx_xvpackod_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvpackod_h(_1, _2);}
+v8i32 __lasx_xvpackod_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvpackod_w(_1, _2);}
+v4i64 __lasx_xvpackod_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvpackod_d(_1, _2);}
+v32i8 __lasx_xvshuf_b(v32i8 _1, v32i8 _2, v32i8 _3){return __builtin_lasx_xvshuf_b(_1, _2, _3);}
+v16i16 __lasx_xvshuf_h(v16i16 _1, v16i16 _2, v16i16 _3){return __builtin_lasx_xvshuf_h(_1, _2, _3);}
+v8i32 __lasx_xvshuf_w(v8i32 _1, v8i32 _2, v8i32 _3){return __builtin_lasx_xvshuf_w(_1, _2, _3);}
+v4i64 __lasx_xvshuf_d(v4i64 _1, v4i64 _2, v4i64 _3){return __builtin_lasx_xvshuf_d(_1, _2, _3);}
+v32u8 __lasx_xvand_v(v32u8 _1, v32u8 _2){return __builtin_lasx_xvand_v(_1, _2);}
+v32u8 __lasx_xvandi_b(v32u8 _1){return __builtin_lasx_xvandi_b(_1, 1);}
+v32u8 __lasx_xvor_v(v32u8 _1, v32u8 _2){return __builtin_lasx_xvor_v(_1, _2);}
+v32u8 __lasx_xvori_b(v32u8 _1){return __builtin_lasx_xvori_b(_1, 1);}
+v32u8 __lasx_xvnor_v(v32u8 _1, v32u8 _2){return __builtin_lasx_xvnor_v(_1, _2);}
+v32u8 __lasx_xvnori_b(v32u8 _1){return __builtin_lasx_xvnori_b(_1, 1);}
+v32u8 __lasx_xvxor_v(v32u8 _1, v32u8 _2){return __builtin_lasx_xvxor_v(_1, _2);}
+v32u8 __lasx_xvxori_b(v32u8 _1){return __builtin_lasx_xvxori_b(_1, 1);}
+v32u8 __lasx_xvbitsel_v(v32u8 _1, v32u8 _2, v32u8 _3){return __builtin_lasx_xvbitsel_v(_1, _2, _3);}
+v32u8 __lasx_xvbitseli_b(v32u8 _1, v32u8 _2){return __builtin_lasx_xvbitseli_b(_1, _2, 1);}
+v32i8 __lasx_xvshuf4i_b(v32i8 _1){return __builtin_lasx_xvshuf4i_b(_1, 1);}
+v16i16 __lasx_xvshuf4i_h(v16i16 _1){return __builtin_lasx_xvshuf4i_h(_1, 1);}
+v8i32 __lasx_xvshuf4i_w(v8i32 _1){return __builtin_lasx_xvshuf4i_w(_1, 1);}
+v32i8 __lasx_xvreplgr2vr_b(int _1){return __builtin_lasx_xvreplgr2vr_b(_1);}
+v16i16 __lasx_xvreplgr2vr_h(int _1){return __builtin_lasx_xvreplgr2vr_h(_1);}
+v8i32 __lasx_xvreplgr2vr_w(int _1){return __builtin_lasx_xvreplgr2vr_w(_1);}
+v4i64 __lasx_xvreplgr2vr_d(int _1){return __builtin_lasx_xvreplgr2vr_d(_1);}
+v32i8 __lasx_xvpcnt_b(v32i8 _1){return __builtin_lasx_xvpcnt_b(_1);}
+v16i16 __lasx_xvpcnt_h(v16i16 _1){return __builtin_lasx_xvpcnt_h(_1);}
+v8i32 __lasx_xvpcnt_w(v8i32 _1){return __builtin_lasx_xvpcnt_w(_1);}
+v4i64 __lasx_xvpcnt_d(v4i64 _1){return __builtin_lasx_xvpcnt_d(_1);}
+v32i8 __lasx_xvclo_b(v32i8 _1){return __builtin_lasx_xvclo_b(_1);}
+v16i16 __lasx_xvclo_h(v16i16 _1){return __builtin_lasx_xvclo_h(_1);}
+v8i32 __lasx_xvclo_w(v8i32 _1){return __builtin_lasx_xvclo_w(_1);}
+v4i64 __lasx_xvclo_d(v4i64 _1){return __builtin_lasx_xvclo_d(_1);}
+v32i8 __lasx_xvclz_b(v32i8 _1){return __builtin_lasx_xvclz_b(_1);}
+v16i16 __lasx_xvclz_h(v16i16 _1){return __builtin_lasx_xvclz_h(_1);}
+v8i32 __lasx_xvclz_w(v8i32 _1){return __builtin_lasx_xvclz_w(_1);}
+v4i64 __lasx_xvclz_d(v4i64 _1){return __builtin_lasx_xvclz_d(_1);}
+v8f32 __lasx_xvfadd_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfadd_s(_1, _2);}
+v4f64 __lasx_xvfadd_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfadd_d(_1, _2);}
+v8f32 __lasx_xvfsub_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfsub_s(_1, _2);}
+v4f64 __lasx_xvfsub_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfsub_d(_1, _2);}
+v8f32 __lasx_xvfmul_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfmul_s(_1, _2);}
+v4f64 __lasx_xvfmul_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfmul_d(_1, _2);}
+v8f32 __lasx_xvfdiv_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfdiv_s(_1, _2);}
+v4f64 __lasx_xvfdiv_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfdiv_d(_1, _2);}
+v16i16 __lasx_xvfcvt_h_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcvt_h_s(_1, _2);}
+v8f32 __lasx_xvfcvt_s_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcvt_s_d(_1, _2);}
+v8f32 __lasx_xvfmin_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfmin_s(_1, _2);}
+v4f64 __lasx_xvfmin_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfmin_d(_1, _2);}
+v8f32 __lasx_xvfmina_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfmina_s(_1, _2);}
+v4f64 __lasx_xvfmina_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfmina_d(_1, _2);}
+v8f32 __lasx_xvfmax_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfmax_s(_1, _2);}
+v4f64 __lasx_xvfmax_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfmax_d(_1, _2);}
+v8f32 __lasx_xvfmaxa_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfmaxa_s(_1, _2);}
+v4f64 __lasx_xvfmaxa_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfmaxa_d(_1, _2);}
+v8i32 __lasx_xvfclass_s(v8f32 _1){return __builtin_lasx_xvfclass_s(_1);}
+v4i64 __lasx_xvfclass_d(v4f64 _1){return __builtin_lasx_xvfclass_d(_1);}
+v8f32 __lasx_xvfsqrt_s(v8f32 _1){return __builtin_lasx_xvfsqrt_s(_1);}
+v4f64 __lasx_xvfsqrt_d(v4f64 _1){return __builtin_lasx_xvfsqrt_d(_1);}
+v8f32 __lasx_xvfrecip_s(v8f32 _1){return __builtin_lasx_xvfrecip_s(_1);}
+v4f64 __lasx_xvfrecip_d(v4f64 _1){return __builtin_lasx_xvfrecip_d(_1);}
+v8f32 __lasx_xvfrint_s(v8f32 _1){return __builtin_lasx_xvfrint_s(_1);}
+v4f64 __lasx_xvfrint_d(v4f64 _1){return __builtin_lasx_xvfrint_d(_1);}
+v8f32 __lasx_xvfrsqrt_s(v8f32 _1){return __builtin_lasx_xvfrsqrt_s(_1);}
+v4f64 __lasx_xvfrsqrt_d(v4f64 _1){return __builtin_lasx_xvfrsqrt_d(_1);}
+v8f32 __lasx_xvflogb_s(v8f32 _1){return __builtin_lasx_xvflogb_s(_1);}
+v4f64 __lasx_xvflogb_d(v4f64 _1){return __builtin_lasx_xvflogb_d(_1);}
+v8f32 __lasx_xvfcvth_s_h(v16i16 _1){return __builtin_lasx_xvfcvth_s_h(_1);}
+v4f64 __lasx_xvfcvth_d_s(v8f32 _1){return __builtin_lasx_xvfcvth_d_s(_1);}
+v8f32 __lasx_xvfcvtl_s_h(v16i16 _1){return __builtin_lasx_xvfcvtl_s_h(_1);}
+v4f64 __lasx_xvfcvtl_d_s(v8f32 _1){return __builtin_lasx_xvfcvtl_d_s(_1);}
+v8i32 __lasx_xvftint_w_s(v8f32 _1){return __builtin_lasx_xvftint_w_s(_1);}
+v4i64 __lasx_xvftint_l_d(v4f64 _1){return __builtin_lasx_xvftint_l_d(_1);}
+v8u32 __lasx_xvftint_wu_s(v8f32 _1){return __builtin_lasx_xvftint_wu_s(_1);}
+v4u64 __lasx_xvftint_lu_d(v4f64 _1){return __builtin_lasx_xvftint_lu_d(_1);}
+v8i32 __lasx_xvftintrz_w_s(v8f32 _1){return __builtin_lasx_xvftintrz_w_s(_1);}
+v4i64 __lasx_xvftintrz_l_d(v4f64 _1){return __builtin_lasx_xvftintrz_l_d(_1);}
+v8u32 __lasx_xvftintrz_wu_s(v8f32 _1){return __builtin_lasx_xvftintrz_wu_s(_1);}
+v4u64 __lasx_xvftintrz_lu_d(v4f64 _1){return __builtin_lasx_xvftintrz_lu_d(_1);}
+v8f32 __lasx_xvffint_s_w(v8i32 _1){return __builtin_lasx_xvffint_s_w(_1);}
+v4f64 __lasx_xvffint_d_l(v4i64 _1){return __builtin_lasx_xvffint_d_l(_1);}
+v8f32 __lasx_xvffint_s_wu(v8u32 _1){return __builtin_lasx_xvffint_s_wu(_1);}
+v4f64 __lasx_xvffint_d_lu(v4u64 _1){return __builtin_lasx_xvffint_d_lu(_1);}
+v32i8 __lasx_xvreplve_b(v32i8 _1, int _2){return __builtin_lasx_xvreplve_b(_1, _2);}
+v16i16 __lasx_xvreplve_h(v16i16 _1, int _2){return __builtin_lasx_xvreplve_h(_1, _2);}
+v8i32 __lasx_xvreplve_w(v8i32 _1, int _2){return __builtin_lasx_xvreplve_w(_1, _2);}
+v4i64 __lasx_xvreplve_d(v4i64 _1, int _2){return __builtin_lasx_xvreplve_d(_1, _2);}
+v8i32 __lasx_xvpermi_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvpermi_w(_1, _2, 1);}
+v32u8 __lasx_xvandn_v(v32u8 _1, v32u8 _2){return __builtin_lasx_xvandn_v(_1, _2);}
+v32i8 __lasx_xvneg_b(v32i8 _1){return __builtin_lasx_xvneg_b(_1);}
+v16i16 __lasx_xvneg_h(v16i16 _1){return __builtin_lasx_xvneg_h(_1);}
+v8i32 __lasx_xvneg_w(v8i32 _1){return __builtin_lasx_xvneg_w(_1);}
+v4i64 __lasx_xvneg_d(v4i64 _1){return __builtin_lasx_xvneg_d(_1);}
+v32i8 __lasx_xvmuh_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvmuh_b(_1, _2);}
+v16i16 __lasx_xvmuh_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvmuh_h(_1, _2);}
+v8i32 __lasx_xvmuh_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvmuh_w(_1, _2);}
+v4i64 __lasx_xvmuh_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvmuh_d(_1, _2);}
+v32u8 __lasx_xvmuh_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvmuh_bu(_1, _2);}
+v16u16 __lasx_xvmuh_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvmuh_hu(_1, _2);}
+v8u32 __lasx_xvmuh_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvmuh_wu(_1, _2);}
+v4u64 __lasx_xvmuh_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvmuh_du(_1, _2);}
+v16i16 __lasx_xvsllwil_h_b(v32i8 _1){return __builtin_lasx_xvsllwil_h_b(_1, 1);}
+v8i32 __lasx_xvsllwil_w_h(v16i16 _1){return __builtin_lasx_xvsllwil_w_h(_1, 1);}
+v4i64 __lasx_xvsllwil_d_w(v8i32 _1){return __builtin_lasx_xvsllwil_d_w(_1, 1);}
+v16u16 __lasx_xvsllwil_hu_bu(v32u8 _1){return __builtin_lasx_xvsllwil_hu_bu(_1, 1);}
+v8u32 __lasx_xvsllwil_wu_hu(v16u16 _1){return __builtin_lasx_xvsllwil_wu_hu(_1, 1);}
+v4u64 __lasx_xvsllwil_du_wu(v8u32 _1){return __builtin_lasx_xvsllwil_du_wu(_1, 1);}
+v32i8 __lasx_xvsran_b_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsran_b_h(_1, _2);}
+v16i16 __lasx_xvsran_h_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsran_h_w(_1, _2);}
+v8i32 __lasx_xvsran_w_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsran_w_d(_1, _2);}
+v32i8 __lasx_xvssran_b_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvssran_b_h(_1, _2);}
+v16i16 __lasx_xvssran_h_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvssran_h_w(_1, _2);}
+v8i32 __lasx_xvssran_w_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvssran_w_d(_1, _2);}
+v32u8 __lasx_xvssran_bu_h(v16u16 _1, v16u16 _2){return __builtin_lasx_xvssran_bu_h(_1, _2);}
+v16u16 __lasx_xvssran_hu_w(v8u32 _1, v8u32 _2){return __builtin_lasx_xvssran_hu_w(_1, _2);}
+v8u32 __lasx_xvssran_wu_d(v4u64 _1, v4u64 _2){return __builtin_lasx_xvssran_wu_d(_1, _2);}
+v32i8 __lasx_xvsrarn_b_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsrarn_b_h(_1, _2);}
+v16i16 __lasx_xvsrarn_h_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsrarn_h_w(_1, _2);}
+v8i32 __lasx_xvsrarn_w_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsrarn_w_d(_1, _2);}
+v32i8 __lasx_xvssrarn_b_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvssrarn_b_h(_1, _2);}
+v16i16 __lasx_xvssrarn_h_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvssrarn_h_w(_1, _2);}
+v8i32 __lasx_xvssrarn_w_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvssrarn_w_d(_1, _2);}
+v32u8 __lasx_xvssrarn_bu_h(v16u16 _1, v16u16 _2){return __builtin_lasx_xvssrarn_bu_h(_1, _2);}
+v16u16 __lasx_xvssrarn_hu_w(v8u32 _1, v8u32 _2){return __builtin_lasx_xvssrarn_hu_w(_1, _2);}
+v8u32 __lasx_xvssrarn_wu_d(v4u64 _1, v4u64 _2){return __builtin_lasx_xvssrarn_wu_d(_1, _2);}
+v32i8 __lasx_xvsrln_b_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsrln_b_h(_1, _2);}
+v16i16 __lasx_xvsrln_h_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsrln_h_w(_1, _2);}
+v8i32 __lasx_xvsrln_w_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsrln_w_d(_1, _2);}
+v32u8 __lasx_xvssrln_bu_h(v16u16 _1, v16u16 _2){return __builtin_lasx_xvssrln_bu_h(_1, _2);}
+v16u16 __lasx_xvssrln_hu_w(v8u32 _1, v8u32 _2){return __builtin_lasx_xvssrln_hu_w(_1, _2);}
+v8u32 __lasx_xvssrln_wu_d(v4u64 _1, v4u64 _2){return __builtin_lasx_xvssrln_wu_d(_1, _2);}
+v32i8 __lasx_xvsrlrn_b_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsrlrn_b_h(_1, _2);}
+v16i16 __lasx_xvsrlrn_h_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsrlrn_h_w(_1, _2);}
+v8i32 __lasx_xvsrlrn_w_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsrlrn_w_d(_1, _2);}
+v32u8 __lasx_xvssrlrn_bu_h(v16u16 _1, v16u16 _2){return __builtin_lasx_xvssrlrn_bu_h(_1, _2);}
+v16u16 __lasx_xvssrlrn_hu_w(v8u32 _1, v8u32 _2){return __builtin_lasx_xvssrlrn_hu_w(_1, _2);}
+v8u32 __lasx_xvssrlrn_wu_d(v4u64 _1, v4u64 _2){return __builtin_lasx_xvssrlrn_wu_d(_1, _2);}
+v32i8 __lasx_xvfrstpi_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvfrstpi_b(_1, _2, 1);}
+v16i16 __lasx_xvfrstpi_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvfrstpi_h(_1, _2, 1);}
+v32i8 __lasx_xvfrstp_b(v32i8 _1, v32i8 _2, v32i8 _3){return __builtin_lasx_xvfrstp_b(_1, _2, _3);}
+v16i16 __lasx_xvfrstp_h(v16i16 _1, v16i16 _2, v16i16 _3){return __builtin_lasx_xvfrstp_h(_1, _2, _3);}
+v4i64 __lasx_xvshuf4i_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvshuf4i_d(_1, _2, 1);}
+v32i8 __lasx_xvbsrl_v(v32i8 _1){return __builtin_lasx_xvbsrl_v(_1, 1);}
+v32i8 __lasx_xvbsll_v(v32i8 _1){return __builtin_lasx_xvbsll_v(_1, 1);}
+v32i8 __lasx_xvextrins_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvextrins_b(_1, _2, 1);}
+v16i16 __lasx_xvextrins_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvextrins_h(_1, _2, 1);}
+v8i32 __lasx_xvextrins_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvextrins_w(_1, _2, 1);}
+v4i64 __lasx_xvextrins_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvextrins_d(_1, _2, 1);}
+v32i8 __lasx_xvmskltz_b(v32i8 _1){return __builtin_lasx_xvmskltz_b(_1);}
+v16i16 __lasx_xvmskltz_h(v16i16 _1){return __builtin_lasx_xvmskltz_h(_1);}
+v8i32 __lasx_xvmskltz_w(v8i32 _1){return __builtin_lasx_xvmskltz_w(_1);}
+v4i64 __lasx_xvmskltz_d(v4i64 _1){return __builtin_lasx_xvmskltz_d(_1);}
+v32i8 __lasx_xvsigncov_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsigncov_b(_1, _2);}
+v16i16 __lasx_xvsigncov_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsigncov_h(_1, _2);}
+v8i32 __lasx_xvsigncov_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsigncov_w(_1, _2);}
+v4i64 __lasx_xvsigncov_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsigncov_d(_1, _2);}
+v8f32 __lasx_xvfmadd_s(v8f32 _1, v8f32 _2, v8f32 _3){return __builtin_lasx_xvfmadd_s(_1, _2, _3);}
+v4f64 __lasx_xvfmadd_d(v4f64 _1, v4f64 _2, v4f64 _3){return __builtin_lasx_xvfmadd_d(_1, _2, _3);}
+v8f32 __lasx_xvfmsub_s(v8f32 _1, v8f32 _2, v8f32 _3){return __builtin_lasx_xvfmsub_s(_1, _2, _3);}
+v4f64 __lasx_xvfmsub_d(v4f64 _1, v4f64 _2, v4f64 _3){return __builtin_lasx_xvfmsub_d(_1, _2, _3);}
+v8f32 __lasx_xvfnmadd_s(v8f32 _1, v8f32 _2, v8f32 _3){return __builtin_lasx_xvfnmadd_s(_1, _2, _3);}
+v4f64 __lasx_xvfnmadd_d(v4f64 _1, v4f64 _2, v4f64 _3){return __builtin_lasx_xvfnmadd_d(_1, _2, _3);}
+v8f32 __lasx_xvfnmsub_s(v8f32 _1, v8f32 _2, v8f32 _3){return __builtin_lasx_xvfnmsub_s(_1, _2, _3);}
+v4f64 __lasx_xvfnmsub_d(v4f64 _1, v4f64 _2, v4f64 _3){return __builtin_lasx_xvfnmsub_d(_1, _2, _3);}
+v8i32 __lasx_xvftintrne_w_s(v8f32 _1){return __builtin_lasx_xvftintrne_w_s(_1);}
+v4i64 __lasx_xvftintrne_l_d(v4f64 _1){return __builtin_lasx_xvftintrne_l_d(_1);}
+v8i32 __lasx_xvftintrp_w_s(v8f32 _1){return __builtin_lasx_xvftintrp_w_s(_1);}
+v4i64 __lasx_xvftintrp_l_d(v4f64 _1){return __builtin_lasx_xvftintrp_l_d(_1);}
+v8i32 __lasx_xvftintrm_w_s(v8f32 _1){return __builtin_lasx_xvftintrm_w_s(_1);}
+v4i64 __lasx_xvftintrm_l_d(v4f64 _1){return __builtin_lasx_xvftintrm_l_d(_1);}
+v8i32 __lasx_xvftint_w_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvftint_w_d(_1, _2);}
+v8f32 __lasx_xvffint_s_l(v4i64 _1, v4i64 _2){return __builtin_lasx_xvffint_s_l(_1, _2);}
+v8i32 __lasx_xvftintrz_w_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvftintrz_w_d(_1, _2);}
+v8i32 __lasx_xvftintrp_w_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvftintrp_w_d(_1, _2);}
+v8i32 __lasx_xvftintrm_w_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvftintrm_w_d(_1, _2);}
+v8i32 __lasx_xvftintrne_w_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvftintrne_w_d(_1, _2);}
+v4i64 __lasx_xvftinth_l_s(v8f32 _1){return __builtin_lasx_xvftinth_l_s(_1);}
+v4i64 __lasx_xvftintl_l_s(v8f32 _1){return __builtin_lasx_xvftintl_l_s(_1);}
+v4f64 __lasx_xvffinth_d_w(v8i32 _1){return __builtin_lasx_xvffinth_d_w(_1);}
+v4f64 __lasx_xvffintl_d_w(v8i32 _1){return __builtin_lasx_xvffintl_d_w(_1);}
+v4i64 __lasx_xvftintrzh_l_s(v8f32 _1){return __builtin_lasx_xvftintrzh_l_s(_1);}
+v4i64 __lasx_xvftintrzl_l_s(v8f32 _1){return __builtin_lasx_xvftintrzl_l_s(_1);}
+v4i64 __lasx_xvftintrph_l_s(v8f32 _1){return __builtin_lasx_xvftintrph_l_s(_1);}
+v4i64 __lasx_xvftintrpl_l_s(v8f32 _1){return __builtin_lasx_xvftintrpl_l_s(_1);}
+v4i64 __lasx_xvftintrmh_l_s(v8f32 _1){return __builtin_lasx_xvftintrmh_l_s(_1);}
+v4i64 __lasx_xvftintrml_l_s(v8f32 _1){return __builtin_lasx_xvftintrml_l_s(_1);}
+v4i64 __lasx_xvftintrneh_l_s(v8f32 _1){return __builtin_lasx_xvftintrneh_l_s(_1);}
+v4i64 __lasx_xvftintrnel_l_s(v8f32 _1){return __builtin_lasx_xvftintrnel_l_s(_1);}
+v8f32 __lasx_xvfrintrne_s(v8f32 _1){return __builtin_lasx_xvfrintrne_s(_1);}
+v4f64 __lasx_xvfrintrne_d(v4f64 _1){return __builtin_lasx_xvfrintrne_d(_1);}
+v8f32 __lasx_xvfrintrz_s(v8f32 _1){return __builtin_lasx_xvfrintrz_s(_1);}
+v4f64 __lasx_xvfrintrz_d(v4f64 _1){return __builtin_lasx_xvfrintrz_d(_1);}
+v8f32 __lasx_xvfrintrp_s(v8f32 _1){return __builtin_lasx_xvfrintrp_s(_1);}
+v4f64 __lasx_xvfrintrp_d(v4f64 _1){return __builtin_lasx_xvfrintrp_d(_1);}
+v8f32 __lasx_xvfrintrm_s(v8f32 _1){return __builtin_lasx_xvfrintrm_s(_1);}
+v4f64 __lasx_xvfrintrm_d(v4f64 _1){return __builtin_lasx_xvfrintrm_d(_1);}
+v32i8 __lasx_xvld(void * _1){return __builtin_lasx_xvld(_1, 1);}
+void __lasx_xvst(v32i8 _1, void * _2){return __builtin_lasx_xvst(_1, _2, 1);}
+void __lasx_xvstelm_b(v32i8 _1, void * _2){return __builtin_lasx_xvstelm_b(_1, _2, 1, 1);}
+void __lasx_xvstelm_h(v16i16 _1, void * _2){return __builtin_lasx_xvstelm_h(_1, _2, 2, 1);}
+void __lasx_xvstelm_w(v8i32 _1, void * _2){return __builtin_lasx_xvstelm_w(_1, _2, 4, 1);}
+void __lasx_xvstelm_d(v4i64 _1, void * _2){return __builtin_lasx_xvstelm_d(_1, _2, 8, 1);}
+v8i32 __lasx_xvinsve0_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvinsve0_w(_1, _2, 1);}
+v4i64 __lasx_xvinsve0_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvinsve0_d(_1, _2, 1);}
+v8i32 __lasx_xvpickve_w(v8i32 _1){return __builtin_lasx_xvpickve_w(_1, 1);}
+v4i64 __lasx_xvpickve_d(v4i64 _1){return __builtin_lasx_xvpickve_d(_1, 1);}
+v32i8 __lasx_xvssrlrn_b_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvssrlrn_b_h(_1, _2);}
+v16i16 __lasx_xvssrlrn_h_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvssrlrn_h_w(_1, _2);}
+v8i32 __lasx_xvssrlrn_w_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvssrlrn_w_d(_1, _2);}
+v32i8 __lasx_xvssrln_b_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvssrln_b_h(_1, _2);}
+v16i16 __lasx_xvssrln_h_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvssrln_h_w(_1, _2);}
+v8i32 __lasx_xvssrln_w_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvssrln_w_d(_1, _2);}
+v32i8 __lasx_xvorn_v(v32i8 _1, v32i8 _2){return __builtin_lasx_xvorn_v(_1, _2);}
+v4i64 __lasx_xvldi(){return __builtin_lasx_xvldi(1);}
+v32i8 __lasx_xvldx(void * _1){return __builtin_lasx_xvldx(_1, 1);}
+void __lasx_xvstx(v32i8 _1, void * _2){return __builtin_lasx_xvstx(_1, _2, 1);}
+v4u64 __lasx_xvextl_qu_du(v4u64 _1){return __builtin_lasx_xvextl_qu_du(_1);}
+v8i32 __lasx_xvinsgr2vr_w(v8i32 _1){return __builtin_lasx_xvinsgr2vr_w(_1, 1, 1);}
+v4i64 __lasx_xvinsgr2vr_d(v4i64 _1){return __builtin_lasx_xvinsgr2vr_d(_1, 1, 1);}
+v32i8 __lasx_xvreplve0_b(v32i8 _1){return __builtin_lasx_xvreplve0_b(_1);}
+v16i16 __lasx_xvreplve0_h(v16i16 _1){return __builtin_lasx_xvreplve0_h(_1);}
+v8i32 __lasx_xvreplve0_w(v8i32 _1){return __builtin_lasx_xvreplve0_w(_1);}
+v4i64 __lasx_xvreplve0_d(v4i64 _1){return __builtin_lasx_xvreplve0_d(_1);}
+v32i8 __lasx_xvreplve0_q(v32i8 _1){return __builtin_lasx_xvreplve0_q(_1);}
+v16i16 __lasx_vext2xv_h_b(v32i8 _1){return __builtin_lasx_vext2xv_h_b(_1);}
+v8i32 __lasx_vext2xv_w_h(v16i16 _1){return __builtin_lasx_vext2xv_w_h(_1);}
+v4i64 __lasx_vext2xv_d_w(v8i32 _1){return __builtin_lasx_vext2xv_d_w(_1);}
+v8i32 __lasx_vext2xv_w_b(v32i8 _1){return __builtin_lasx_vext2xv_w_b(_1);}
+v4i64 __lasx_vext2xv_d_h(v16i16 _1){return __builtin_lasx_vext2xv_d_h(_1);}
+v4i64 __lasx_vext2xv_d_b(v32i8 _1){return __builtin_lasx_vext2xv_d_b(_1);}
+v16i16 __lasx_vext2xv_hu_bu(v32i8 _1){return __builtin_lasx_vext2xv_hu_bu(_1);}
+v8i32 __lasx_vext2xv_wu_hu(v16i16 _1){return __builtin_lasx_vext2xv_wu_hu(_1);}
+v4i64 __lasx_vext2xv_du_wu(v8i32 _1){return __builtin_lasx_vext2xv_du_wu(_1);}
+v8i32 __lasx_vext2xv_wu_bu(v32i8 _1){return __builtin_lasx_vext2xv_wu_bu(_1);}
+v4i64 __lasx_vext2xv_du_hu(v16i16 _1){return __builtin_lasx_vext2xv_du_hu(_1);}
+v4i64 __lasx_vext2xv_du_bu(v32i8 _1){return __builtin_lasx_vext2xv_du_bu(_1);}
+v32i8 __lasx_xvpermi_q(v32i8 _1, v32i8 _2){return __builtin_lasx_xvpermi_q(_1, _2, 1);}
+v4i64 __lasx_xvpermi_d(v4i64 _1){return __builtin_lasx_xvpermi_d(_1, 1);}
+v8i32 __lasx_xvperm_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvperm_w(_1, _2);}
+v32i8 __lasx_xvldrepl_b(void * _1){return __builtin_lasx_xvldrepl_b(_1, 1);}
+v16i16 __lasx_xvldrepl_h(void * _1){return __builtin_lasx_xvldrepl_h(_1, 2);}
+v8i32 __lasx_xvldrepl_w(void * _1){return __builtin_lasx_xvldrepl_w(_1, 4);}
+v4i64 __lasx_xvldrepl_d(void * _1){return __builtin_lasx_xvldrepl_d(_1, 8);}
+int __lasx_xvpickve2gr_w(v8i32 _1){return __builtin_lasx_xvpickve2gr_w(_1, 1);}
+unsigned int __lasx_xvpickve2gr_wu(v8i32 _1){return __builtin_lasx_xvpickve2gr_wu(_1, 1);}
+long __lasx_xvpickve2gr_d(v4i64 _1){return __builtin_lasx_xvpickve2gr_d(_1, 1);}
+unsigned long int __lasx_xvpickve2gr_du(v4i64 _1){return __builtin_lasx_xvpickve2gr_du(_1, 1);}
+v4i64 __lasx_xvaddwev_q_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvaddwev_q_d(_1, _2);}
+v4i64 __lasx_xvaddwev_d_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvaddwev_d_w(_1, _2);}
+v8i32 __lasx_xvaddwev_w_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvaddwev_w_h(_1, _2);}
+v16i16 __lasx_xvaddwev_h_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvaddwev_h_b(_1, _2);}
+v4i64 __lasx_xvaddwev_q_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvaddwev_q_du(_1, _2);}
+v4i64 __lasx_xvaddwev_d_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvaddwev_d_wu(_1, _2);}
+v8i32 __lasx_xvaddwev_w_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvaddwev_w_hu(_1, _2);}
+v16i16 __lasx_xvaddwev_h_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvaddwev_h_bu(_1, _2);}
+v4i64 __lasx_xvsubwev_q_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsubwev_q_d(_1, _2);}
+v4i64 __lasx_xvsubwev_d_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsubwev_d_w(_1, _2);}
+v8i32 __lasx_xvsubwev_w_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsubwev_w_h(_1, _2);}
+v16i16 __lasx_xvsubwev_h_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsubwev_h_b(_1, _2);}
+v4i64 __lasx_xvsubwev_q_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvsubwev_q_du(_1, _2);}
+v4i64 __lasx_xvsubwev_d_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvsubwev_d_wu(_1, _2);}
+v8i32 __lasx_xvsubwev_w_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvsubwev_w_hu(_1, _2);}
+v16i16 __lasx_xvsubwev_h_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvsubwev_h_bu(_1, _2);}
+v4i64 __lasx_xvmulwev_q_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvmulwev_q_d(_1, _2);}
+v4i64 __lasx_xvmulwev_d_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvmulwev_d_w(_1, _2);}
+v8i32 __lasx_xvmulwev_w_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvmulwev_w_h(_1, _2);}
+v16i16 __lasx_xvmulwev_h_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvmulwev_h_b(_1, _2);}
+v4i64 __lasx_xvmulwev_q_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvmulwev_q_du(_1, _2);}
+v4i64 __lasx_xvmulwev_d_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvmulwev_d_wu(_1, _2);}
+v8i32 __lasx_xvmulwev_w_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvmulwev_w_hu(_1, _2);}
+v16i16 __lasx_xvmulwev_h_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvmulwev_h_bu(_1, _2);}
+v4i64 __lasx_xvaddwod_q_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvaddwod_q_d(_1, _2);}
+v4i64 __lasx_xvaddwod_d_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvaddwod_d_w(_1, _2);}
+v8i32 __lasx_xvaddwod_w_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvaddwod_w_h(_1, _2);}
+v16i16 __lasx_xvaddwod_h_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvaddwod_h_b(_1, _2);}
+v4i64 __lasx_xvaddwod_q_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvaddwod_q_du(_1, _2);}
+v4i64 __lasx_xvaddwod_d_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvaddwod_d_wu(_1, _2);}
+v8i32 __lasx_xvaddwod_w_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvaddwod_w_hu(_1, _2);}
+v16i16 __lasx_xvaddwod_h_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvaddwod_h_bu(_1, _2);}
+v4i64 __lasx_xvsubwod_q_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsubwod_q_d(_1, _2);}
+v4i64 __lasx_xvsubwod_d_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsubwod_d_w(_1, _2);}
+v8i32 __lasx_xvsubwod_w_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsubwod_w_h(_1, _2);}
+v16i16 __lasx_xvsubwod_h_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsubwod_h_b(_1, _2);}
+v4i64 __lasx_xvsubwod_q_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvsubwod_q_du(_1, _2);}
+v4i64 __lasx_xvsubwod_d_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvsubwod_d_wu(_1, _2);}
+v8i32 __lasx_xvsubwod_w_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvsubwod_w_hu(_1, _2);}
+v16i16 __lasx_xvsubwod_h_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvsubwod_h_bu(_1, _2);}
+v4i64 __lasx_xvmulwod_q_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvmulwod_q_d(_1, _2);}
+v4i64 __lasx_xvmulwod_d_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvmulwod_d_w(_1, _2);}
+v8i32 __lasx_xvmulwod_w_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvmulwod_w_h(_1, _2);}
+v16i16 __lasx_xvmulwod_h_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvmulwod_h_b(_1, _2);}
+v4i64 __lasx_xvmulwod_q_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvmulwod_q_du(_1, _2);}
+v4i64 __lasx_xvmulwod_d_wu(v8u32 _1, v8u32 _2){return __builtin_lasx_xvmulwod_d_wu(_1, _2);}
+v8i32 __lasx_xvmulwod_w_hu(v16u16 _1, v16u16 _2){return __builtin_lasx_xvmulwod_w_hu(_1, _2);}
+v16i16 __lasx_xvmulwod_h_bu(v32u8 _1, v32u8 _2){return __builtin_lasx_xvmulwod_h_bu(_1, _2);}
+v4i64 __lasx_xvaddwev_d_wu_w(v8u32 _1, v8i32 _2){return __builtin_lasx_xvaddwev_d_wu_w(_1, _2);}
+v8i32 __lasx_xvaddwev_w_hu_h(v16u16 _1, v16i16 _2){return __builtin_lasx_xvaddwev_w_hu_h(_1, _2);}
+v16i16 __lasx_xvaddwev_h_bu_b(v32u8 _1, v32i8 _2){return __builtin_lasx_xvaddwev_h_bu_b(_1, _2);}
+v4i64 __lasx_xvmulwev_d_wu_w(v8u32 _1, v8i32 _2){return __builtin_lasx_xvmulwev_d_wu_w(_1, _2);}
+v8i32 __lasx_xvmulwev_w_hu_h(v16u16 _1, v16i16 _2){return __builtin_lasx_xvmulwev_w_hu_h(_1, _2);}
+v16i16 __lasx_xvmulwev_h_bu_b(v32u8 _1, v32i8 _2){return __builtin_lasx_xvmulwev_h_bu_b(_1, _2);}
+v4i64 __lasx_xvaddwod_d_wu_w(v8u32 _1, v8i32 _2){return __builtin_lasx_xvaddwod_d_wu_w(_1, _2);}
+v8i32 __lasx_xvaddwod_w_hu_h(v16u16 _1, v16i16 _2){return __builtin_lasx_xvaddwod_w_hu_h(_1, _2);}
+v16i16 __lasx_xvaddwod_h_bu_b(v32u8 _1, v32i8 _2){return __builtin_lasx_xvaddwod_h_bu_b(_1, _2);}
+v4i64 __lasx_xvmulwod_d_wu_w(v8u32 _1, v8i32 _2){return __builtin_lasx_xvmulwod_d_wu_w(_1, _2);}
+v8i32 __lasx_xvmulwod_w_hu_h(v16u16 _1, v16i16 _2){return __builtin_lasx_xvmulwod_w_hu_h(_1, _2);}
+v16i16 __lasx_xvmulwod_h_bu_b(v32u8 _1, v32i8 _2){return __builtin_lasx_xvmulwod_h_bu_b(_1, _2);}
+v4i64 __lasx_xvhaddw_q_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvhaddw_q_d(_1, _2);}
+v4u64 __lasx_xvhaddw_qu_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvhaddw_qu_du(_1, _2);}
+v4i64 __lasx_xvhsubw_q_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvhsubw_q_d(_1, _2);}
+v4u64 __lasx_xvhsubw_qu_du(v4u64 _1, v4u64 _2){return __builtin_lasx_xvhsubw_qu_du(_1, _2);}
+v4i64 __lasx_xvmaddwev_q_d(v4i64 _1, v4i64 _2, v4i64 _3){return __builtin_lasx_xvmaddwev_q_d(_1, _2, _3);}
+v4i64 __lasx_xvmaddwev_d_w(v4i64 _1, v8i32 _2, v8i32 _3){return __builtin_lasx_xvmaddwev_d_w(_1, _2, _3);}
+v8i32 __lasx_xvmaddwev_w_h(v8i32 _1, v16i16 _2, v16i16 _3){return __builtin_lasx_xvmaddwev_w_h(_1, _2, _3);}
+v16i16 __lasx_xvmaddwev_h_b(v16i16 _1, v32i8 _2, v32i8 _3){return __builtin_lasx_xvmaddwev_h_b(_1, _2, _3);}
+v4u64 __lasx_xvmaddwev_q_du(v4u64 _1, v4u64 _2, v4u64 _3){return __builtin_lasx_xvmaddwev_q_du(_1, _2, _3);}
+v4u64 __lasx_xvmaddwev_d_wu(v4u64 _1, v8u32 _2, v8u32 _3){return __builtin_lasx_xvmaddwev_d_wu(_1, _2, _3);}
+v8u32 __lasx_xvmaddwev_w_hu(v8u32 _1, v16u16 _2, v16u16 _3){return __builtin_lasx_xvmaddwev_w_hu(_1, _2, _3);}
+v16u16 __lasx_xvmaddwev_h_bu(v16u16 _1, v32u8 _2, v32u8 _3){return __builtin_lasx_xvmaddwev_h_bu(_1, _2, _3);}
+v4i64 __lasx_xvmaddwod_q_d(v4i64 _1, v4i64 _2, v4i64 _3){return __builtin_lasx_xvmaddwod_q_d(_1, _2, _3);}
+v4i64 __lasx_xvmaddwod_d_w(v4i64 _1, v8i32 _2, v8i32 _3){return __builtin_lasx_xvmaddwod_d_w(_1, _2, _3);}
+v8i32 __lasx_xvmaddwod_w_h(v8i32 _1, v16i16 _2, v16i16 _3){return __builtin_lasx_xvmaddwod_w_h(_1, _2, _3);}
+v16i16 __lasx_xvmaddwod_h_b(v16i16 _1, v32i8 _2, v32i8 _3){return __builtin_lasx_xvmaddwod_h_b(_1, _2, _3);}
+v4u64 __lasx_xvmaddwod_q_du(v4u64 _1, v4u64 _2, v4u64 _3){return __builtin_lasx_xvmaddwod_q_du(_1, _2, _3);}
+v4u64 __lasx_xvmaddwod_d_wu(v4u64 _1, v8u32 _2, v8u32 _3){return __builtin_lasx_xvmaddwod_d_wu(_1, _2, _3);}
+v8u32 __lasx_xvmaddwod_w_hu(v8u32 _1, v16u16 _2, v16u16 _3){return __builtin_lasx_xvmaddwod_w_hu(_1, _2, _3);}
+v16u16 __lasx_xvmaddwod_h_bu(v16u16 _1, v32u8 _2, v32u8 _3){return __builtin_lasx_xvmaddwod_h_bu(_1, _2, _3);}
+v4i64 __lasx_xvmaddwev_q_du_d(v4i64 _1, v4u64 _2, v4i64 _3){return __builtin_lasx_xvmaddwev_q_du_d(_1, _2, _3);}
+v4i64 __lasx_xvmaddwev_d_wu_w(v4i64 _1, v8u32 _2, v8i32 _3){return __builtin_lasx_xvmaddwev_d_wu_w(_1, _2, _3);}
+v8i32 __lasx_xvmaddwev_w_hu_h(v8i32 _1, v16u16 _2, v16i16 _3){return __builtin_lasx_xvmaddwev_w_hu_h(_1, _2, _3);}
+v16i16 __lasx_xvmaddwev_h_bu_b(v16i16 _1, v32u8 _2, v32i8 _3){return __builtin_lasx_xvmaddwev_h_bu_b(_1, _2, _3);}
+v4i64 __lasx_xvmaddwod_q_du_d(v4i64 _1, v4u64 _2, v4i64 _3){return __builtin_lasx_xvmaddwod_q_du_d(_1, _2, _3);}
+v4i64 __lasx_xvmaddwod_d_wu_w(v4i64 _1, v8u32 _2, v8i32 _3){return __builtin_lasx_xvmaddwod_d_wu_w(_1, _2, _3);}
+v8i32 __lasx_xvmaddwod_w_hu_h(v8i32 _1, v16u16 _2, v16i16 _3){return __builtin_lasx_xvmaddwod_w_hu_h(_1, _2, _3);}
+v16i16 __lasx_xvmaddwod_h_bu_b(v16i16 _1, v32u8 _2, v32i8 _3){return __builtin_lasx_xvmaddwod_h_bu_b(_1, _2, _3);}
+v32i8 __lasx_xvrotr_b(v32i8 _1, v32i8 _2){return __builtin_lasx_xvrotr_b(_1, _2);}
+v16i16 __lasx_xvrotr_h(v16i16 _1, v16i16 _2){return __builtin_lasx_xvrotr_h(_1, _2);}
+v8i32 __lasx_xvrotr_w(v8i32 _1, v8i32 _2){return __builtin_lasx_xvrotr_w(_1, _2);}
+v4i64 __lasx_xvrotr_d(v4i64 _1, v4i64 _2){return __builtin_lasx_xvrotr_d(_1, _2);}
+v4i64 __lasx_xvadd_q(v4i64 _1, v4i64 _2){return __builtin_lasx_xvadd_q(_1, _2);}
+v4i64 __lasx_xvsub_q(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsub_q(_1, _2);}
+v4i64 __lasx_xvaddwev_q_du_d(v4u64 _1, v4i64 _2){return __builtin_lasx_xvaddwev_q_du_d(_1, _2);}
+v4i64 __lasx_xvaddwod_q_du_d(v4u64 _1, v4i64 _2){return __builtin_lasx_xvaddwod_q_du_d(_1, _2);}
+v4i64 __lasx_xvmulwev_q_du_d(v4u64 _1, v4i64 _2){return __builtin_lasx_xvmulwev_q_du_d(_1, _2);}
+v4i64 __lasx_xvmulwod_q_du_d(v4u64 _1, v4i64 _2){return __builtin_lasx_xvmulwod_q_du_d(_1, _2);}
+v32i8 __lasx_xvmskgez_b(v32i8 _1){return __builtin_lasx_xvmskgez_b(_1);}
+v32i8 __lasx_xvmsknz_b(v32i8 _1){return __builtin_lasx_xvmsknz_b(_1);}
+v16i16 __lasx_xvexth_h_b(v32i8 _1){return __builtin_lasx_xvexth_h_b(_1);}
+v8i32 __lasx_xvexth_w_h(v16i16 _1){return __builtin_lasx_xvexth_w_h(_1);}
+v4i64 __lasx_xvexth_d_w(v8i32 _1){return __builtin_lasx_xvexth_d_w(_1);}
+v4i64 __lasx_xvexth_q_d(v4i64 _1){return __builtin_lasx_xvexth_q_d(_1);}
+v16u16 __lasx_xvexth_hu_bu(v32u8 _1){return __builtin_lasx_xvexth_hu_bu(_1);}
+v8u32 __lasx_xvexth_wu_hu(v16u16 _1){return __builtin_lasx_xvexth_wu_hu(_1);}
+v4u64 __lasx_xvexth_du_wu(v8u32 _1){return __builtin_lasx_xvexth_du_wu(_1);}
+v4u64 __lasx_xvexth_qu_du(v4u64 _1){return __builtin_lasx_xvexth_qu_du(_1);}
+v32i8 __lasx_xvrotri_b(v32i8 _1){return __builtin_lasx_xvrotri_b(_1, 1);}
+v16i16 __lasx_xvrotri_h(v16i16 _1){return __builtin_lasx_xvrotri_h(_1, 1);}
+v8i32 __lasx_xvrotri_w(v8i32 _1){return __builtin_lasx_xvrotri_w(_1, 1);}
+v4i64 __lasx_xvrotri_d(v4i64 _1){return __builtin_lasx_xvrotri_d(_1, 1);}
+v4i64 __lasx_xvextl_q_d(v4i64 _1){return __builtin_lasx_xvextl_q_d(_1);}
+v32i8 __lasx_xvsrlni_b_h(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsrlni_b_h(_1, _2, 1);}
+v16i16 __lasx_xvsrlni_h_w(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsrlni_h_w(_1, _2, 1);}
+v8i32 __lasx_xvsrlni_w_d(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsrlni_w_d(_1, _2, 1);}
+v4i64 __lasx_xvsrlni_d_q(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsrlni_d_q(_1, _2, 1);}
+v32i8 __lasx_xvsrlrni_b_h(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsrlrni_b_h(_1, _2, 1);}
+v16i16 __lasx_xvsrlrni_h_w(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsrlrni_h_w(_1, _2, 1);}
+v8i32 __lasx_xvsrlrni_w_d(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsrlrni_w_d(_1, _2, 1);}
+v4i64 __lasx_xvsrlrni_d_q(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsrlrni_d_q(_1, _2, 1);}
+v32i8 __lasx_xvssrlni_b_h(v32i8 _1, v32i8 _2){return __builtin_lasx_xvssrlni_b_h(_1, _2, 1);}
+v16i16 __lasx_xvssrlni_h_w(v16i16 _1, v16i16 _2){return __builtin_lasx_xvssrlni_h_w(_1, _2, 1);}
+v8i32 __lasx_xvssrlni_w_d(v8i32 _1, v8i32 _2){return __builtin_lasx_xvssrlni_w_d(_1, _2, 1);}
+v4i64 __lasx_xvssrlni_d_q(v4i64 _1, v4i64 _2){return __builtin_lasx_xvssrlni_d_q(_1, _2, 1);}
+v32u8 __lasx_xvssrlni_bu_h(v32u8 _1, v32i8 _2){return __builtin_lasx_xvssrlni_bu_h(_1, _2, 1);}
+v16u16 __lasx_xvssrlni_hu_w(v16u16 _1, v16i16 _2){return __builtin_lasx_xvssrlni_hu_w(_1, _2, 1);}
+v8u32 __lasx_xvssrlni_wu_d(v8u32 _1, v8i32 _2){return __builtin_lasx_xvssrlni_wu_d(_1, _2, 1);}
+v4u64 __lasx_xvssrlni_du_q(v4u64 _1, v4i64 _2){return __builtin_lasx_xvssrlni_du_q(_1, _2, 1);}
+v32i8 __lasx_xvssrlrni_b_h(v32i8 _1, v32i8 _2){return __builtin_lasx_xvssrlrni_b_h(_1, _2, 1);}
+v16i16 __lasx_xvssrlrni_h_w(v16i16 _1, v16i16 _2){return __builtin_lasx_xvssrlrni_h_w(_1, _2, 1);}
+v8i32 __lasx_xvssrlrni_w_d(v8i32 _1, v8i32 _2){return __builtin_lasx_xvssrlrni_w_d(_1, _2, 1);}
+v4i64 __lasx_xvssrlrni_d_q(v4i64 _1, v4i64 _2){return __builtin_lasx_xvssrlrni_d_q(_1, _2, 1);}
+v32u8 __lasx_xvssrlrni_bu_h(v32u8 _1, v32i8 _2){return __builtin_lasx_xvssrlrni_bu_h(_1, _2, 1);}
+v16u16 __lasx_xvssrlrni_hu_w(v16u16 _1, v16i16 _2){return __builtin_lasx_xvssrlrni_hu_w(_1, _2, 1);}
+v8u32 __lasx_xvssrlrni_wu_d(v8u32 _1, v8i32 _2){return __builtin_lasx_xvssrlrni_wu_d(_1, _2, 1);}
+v4u64 __lasx_xvssrlrni_du_q(v4u64 _1, v4i64 _2){return __builtin_lasx_xvssrlrni_du_q(_1, _2, 1);}
+v32i8 __lasx_xvsrani_b_h(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsrani_b_h(_1, _2, 1);}
+v16i16 __lasx_xvsrani_h_w(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsrani_h_w(_1, _2, 1);}
+v8i32 __lasx_xvsrani_w_d(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsrani_w_d(_1, _2, 1);}
+v4i64 __lasx_xvsrani_d_q(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsrani_d_q(_1, _2, 1);}
+v32i8 __lasx_xvsrarni_b_h(v32i8 _1, v32i8 _2){return __builtin_lasx_xvsrarni_b_h(_1, _2, 1);}
+v16i16 __lasx_xvsrarni_h_w(v16i16 _1, v16i16 _2){return __builtin_lasx_xvsrarni_h_w(_1, _2, 1);}
+v8i32 __lasx_xvsrarni_w_d(v8i32 _1, v8i32 _2){return __builtin_lasx_xvsrarni_w_d(_1, _2, 1);}
+v4i64 __lasx_xvsrarni_d_q(v4i64 _1, v4i64 _2){return __builtin_lasx_xvsrarni_d_q(_1, _2, 1);}
+v32i8 __lasx_xvssrani_b_h(v32i8 _1, v32i8 _2){return __builtin_lasx_xvssrani_b_h(_1, _2, 1);}
+v16i16 __lasx_xvssrani_h_w(v16i16 _1, v16i16 _2){return __builtin_lasx_xvssrani_h_w(_1, _2, 1);}
+v8i32 __lasx_xvssrani_w_d(v8i32 _1, v8i32 _2){return __builtin_lasx_xvssrani_w_d(_1, _2, 1);}
+v4i64 __lasx_xvssrani_d_q(v4i64 _1, v4i64 _2){return __builtin_lasx_xvssrani_d_q(_1, _2, 1);}
+v32u8 __lasx_xvssrani_bu_h(v32u8 _1, v32i8 _2){return __builtin_lasx_xvssrani_bu_h(_1, _2, 1);}
+v16u16 __lasx_xvssrani_hu_w(v16u16 _1, v16i16 _2){return __builtin_lasx_xvssrani_hu_w(_1, _2, 1);}
+v8u32 __lasx_xvssrani_wu_d(v8u32 _1, v8i32 _2){return __builtin_lasx_xvssrani_wu_d(_1, _2, 1);}
+v4u64 __lasx_xvssrani_du_q(v4u64 _1, v4i64 _2){return __builtin_lasx_xvssrani_du_q(_1, _2, 1);}
+v32i8 __lasx_xvssrarni_b_h(v32i8 _1, v32i8 _2){return __builtin_lasx_xvssrarni_b_h(_1, _2, 1);}
+v16i16 __lasx_xvssrarni_h_w(v16i16 _1, v16i16 _2){return __builtin_lasx_xvssrarni_h_w(_1, _2, 1);}
+v8i32 __lasx_xvssrarni_w_d(v8i32 _1, v8i32 _2){return __builtin_lasx_xvssrarni_w_d(_1, _2, 1);}
+v4i64 __lasx_xvssrarni_d_q(v4i64 _1, v4i64 _2){return __builtin_lasx_xvssrarni_d_q(_1, _2, 1);}
+v32u8 __lasx_xvssrarni_bu_h(v32u8 _1, v32i8 _2){return __builtin_lasx_xvssrarni_bu_h(_1, _2, 1);}
+v16u16 __lasx_xvssrarni_hu_w(v16u16 _1, v16i16 _2){return __builtin_lasx_xvssrarni_hu_w(_1, _2, 1);}
+v8u32 __lasx_xvssrarni_wu_d(v8u32 _1, v8i32 _2){return __builtin_lasx_xvssrarni_wu_d(_1, _2, 1);}
+v4u64 __lasx_xvssrarni_du_q(v4u64 _1, v4i64 _2){return __builtin_lasx_xvssrarni_du_q(_1, _2, 1);}
+int __lasx_xbnz_b(v32u8 _1){return __builtin_lasx_xbnz_b(_1);}
+int __lasx_xbnz_d(v4u64 _1){return __builtin_lasx_xbnz_d(_1);}
+int __lasx_xbnz_h(v16u16 _1){return __builtin_lasx_xbnz_h(_1);}
+int __lasx_xbnz_v(v32u8 _1){return __builtin_lasx_xbnz_v(_1);}
+int __lasx_xbnz_w(v8u32 _1){return __builtin_lasx_xbnz_w(_1);}
+int __lasx_xbz_b(v32u8 _1){return __builtin_lasx_xbz_b(_1);}
+int __lasx_xbz_d(v4u64 _1){return __builtin_lasx_xbz_d(_1);}
+int __lasx_xbz_h(v16u16 _1){return __builtin_lasx_xbz_h(_1);}
+int __lasx_xbz_v(v32u8 _1){return __builtin_lasx_xbz_v(_1);}
+int __lasx_xbz_w(v8u32 _1){return __builtin_lasx_xbz_w(_1);}
+v4i64 __lasx_xvfcmp_caf_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_caf_d(_1, _2);}
+v8i32 __lasx_xvfcmp_caf_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_caf_s(_1, _2);}
+v4i64 __lasx_xvfcmp_ceq_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_ceq_d(_1, _2);}
+v8i32 __lasx_xvfcmp_ceq_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_ceq_s(_1, _2);}
+v4i64 __lasx_xvfcmp_cle_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_cle_d(_1, _2);}
+v8i32 __lasx_xvfcmp_cle_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_cle_s(_1, _2);}
+v4i64 __lasx_xvfcmp_clt_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_clt_d(_1, _2);}
+v8i32 __lasx_xvfcmp_clt_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_clt_s(_1, _2);}
+v4i64 __lasx_xvfcmp_cne_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_cne_d(_1, _2);}
+v8i32 __lasx_xvfcmp_cne_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_cne_s(_1, _2);}
+v4i64 __lasx_xvfcmp_cor_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_cor_d(_1, _2);}
+v8i32 __lasx_xvfcmp_cor_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_cor_s(_1, _2);}
+v4i64 __lasx_xvfcmp_cueq_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_cueq_d(_1, _2);}
+v8i32 __lasx_xvfcmp_cueq_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_cueq_s(_1, _2);}
+v4i64 __lasx_xvfcmp_cule_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_cule_d(_1, _2);}
+v8i32 __lasx_xvfcmp_cule_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_cule_s(_1, _2);}
+v4i64 __lasx_xvfcmp_cult_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_cult_d(_1, _2);}
+v8i32 __lasx_xvfcmp_cult_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_cult_s(_1, _2);}
+v4i64 __lasx_xvfcmp_cun_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_cun_d(_1, _2);}
+v4i64 __lasx_xvfcmp_cune_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_cune_d(_1, _2);}
+v8i32 __lasx_xvfcmp_cune_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_cune_s(_1, _2);}
+v8i32 __lasx_xvfcmp_cun_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_cun_s(_1, _2);}
+v4i64 __lasx_xvfcmp_saf_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_saf_d(_1, _2);}
+v8i32 __lasx_xvfcmp_saf_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_saf_s(_1, _2);}
+v4i64 __lasx_xvfcmp_seq_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_seq_d(_1, _2);}
+v8i32 __lasx_xvfcmp_seq_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_seq_s(_1, _2);}
+v4i64 __lasx_xvfcmp_sle_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_sle_d(_1, _2);}
+v8i32 __lasx_xvfcmp_sle_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_sle_s(_1, _2);}
+v4i64 __lasx_xvfcmp_slt_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_slt_d(_1, _2);}
+v8i32 __lasx_xvfcmp_slt_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_slt_s(_1, _2);}
+v4i64 __lasx_xvfcmp_sne_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_sne_d(_1, _2);}
+v8i32 __lasx_xvfcmp_sne_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_sne_s(_1, _2);}
+v4i64 __lasx_xvfcmp_sor_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_sor_d(_1, _2);}
+v8i32 __lasx_xvfcmp_sor_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_sor_s(_1, _2);}
+v4i64 __lasx_xvfcmp_sueq_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_sueq_d(_1, _2);}
+v8i32 __lasx_xvfcmp_sueq_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_sueq_s(_1, _2);}
+v4i64 __lasx_xvfcmp_sule_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_sule_d(_1, _2);}
+v8i32 __lasx_xvfcmp_sule_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_sule_s(_1, _2);}
+v4i64 __lasx_xvfcmp_sult_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_sult_d(_1, _2);}
+v8i32 __lasx_xvfcmp_sult_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_sult_s(_1, _2);}
+v4i64 __lasx_xvfcmp_sun_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_sun_d(_1, _2);}
+v4i64 __lasx_xvfcmp_sune_d(v4f64 _1, v4f64 _2){return __builtin_lasx_xvfcmp_sune_d(_1, _2);}
+v8i32 __lasx_xvfcmp_sune_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_sune_s(_1, _2);}
+v8i32 __lasx_xvfcmp_sun_s(v8f32 _1, v8f32 _2){return __builtin_lasx_xvfcmp_sun_s(_1, _2);}
+v4f64 __lasx_xvpickve_d_f(v4f64 _1){return __builtin_lasx_xvpickve_d_f(_1, 1);}
+v8f32 __lasx_xvpickve_w_f(v8f32 _1){return __builtin_lasx_xvpickve_w_f(_1, 1);}
+v32i8 __lasx_xvrepli_b(){return __builtin_lasx_xvrepli_b(1);}
+v4i64 __lasx_xvrepli_d(){return __builtin_lasx_xvrepli_d(1);}
+v16i16 __lasx_xvrepli_h(){return __builtin_lasx_xvrepli_h(1);}
+v8i32 __lasx_xvrepli_w(){return __builtin_lasx_xvrepli_w(1);}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-cmp.c b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-cmp.c
new file mode 100644
index 00000000000..8358d9e0aef
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-cmp.c
@@ -0,0 +1,5361 @@
+/* { dg-do run } */
+/* { dg-options "-mlasx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lasxintrin.h>
+
+int main ()
+{
+  __m256i __m256i_op0, __m256i_op1, __m256i_op2, __m256i_out, __m256i_result;
+  __m256 __m256_op0, __m256_op1, __m256_op2, __m256_out, __m256_result;
+  __m256d __m256d_op0, __m256d_op1, __m256d_op2, __m256d_out, __m256d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffe000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100020001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fffffffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff000000010000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000095120000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc9da000063f50000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc7387fff6bbfffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f0000007f000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f0000007f000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1555156a1555156a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1555156a1555156a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1555156a1555156a;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1555156a1555156a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6100000800060005;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5ee1c073b800c916;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5ff00007fff9fff3;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0209fefb08140000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0003fffc00060000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ffff00ff000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffffefd;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffcf800fffcf800;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffff00fffffff0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff00fffffff0;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvseq_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvseq_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffefe00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvseq_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvseq_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe161616161616161;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe161616161616161;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000005be55bd2;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffcc8000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007dfdff4b;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvseq_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff00010003;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0080000200000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff00010003;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000ff00ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00ff00ffffff00;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000ff00ff00;
+  __m256i_out = __lasx_xvseq_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4ffc3f783fc040c0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3fc03f803fc040c0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4ffc3f783fc040c0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3fc03f803fc040c0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffee0000004c0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff050000ff3c0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00f9000000780000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffa80000ff310000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8011ffee804c004c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00faff0500c3ff3c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x80f900f980780078;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0057ffa800ceff31;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xff000000ff000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m256i_result[0]) = 0xff000000ff000000;
+  __m256i_out = __lasx_xvseq_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvseq_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvseq_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvseq_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvseq_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op1[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op1[0]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvseq_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000077fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvseq_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvseq_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x003f60041f636003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x003f60041f636003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00000000000000;
+  __m256i_out = __lasx_xvseq_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvseq_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvseq_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000005500000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001005500020000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000005500000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001005500020000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvseq_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvseq_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffefff7f00100080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffefff7f00100080;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff01fb0408;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf2b180c9fc1fefdc;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff01fb0408;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf2b180c9fc1fefdc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff00ffffffff;
+  __m256i_out = __lasx_xvseq_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvseq_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x001fffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x001fffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0080000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000501ffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000701ffffce;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000501ffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000701ffffce;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000260a378;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000d02317;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000260a378;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000d02317;
+  *((unsigned long*)& __m256i_op1[3]) = 0x003f020001400200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x003f00ff003f00c4;
+  *((unsigned long*)& __m256i_op1[1]) = 0x003f020001400200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x003f00ff003f00c4;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseq_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
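+  /* __lasx_xvseqi_{b,h,w,d}: element-wise compare-equal against a signed
+     immediate operand; each result element is all ones on equality and all
+     zeros otherwise.  */
+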
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffdfe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffdfe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,-8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_d(__m256i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_h(__m256i_op0,-8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_b(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_h(__m256i_op0,-3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_b(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x009200f200840080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x009200f200840080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00b200b300800080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00b200b300800080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_h(__m256i_op0,14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_d(__m256i_op0,14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_b(__m256i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_h(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_h(__m256i_op0,11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_b(__m256i_op0,-3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_h(__m256i_op0,10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_d(__m256i_op0,-3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0010000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0010000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,-3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00197d3200197d56;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00197d3200197d56;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_h(__m256i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_b(__m256i_op0,12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_b(__m256i_op0,-8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_b(__m256i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_h(__m256i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_b(__m256i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000bdfef907bc;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000bdfef907bc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_d(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_b(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_b(__m256i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fff000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fff000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_d(__m256i_op0,15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_d(__m256i_op0,14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x800fffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x800fffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x800fffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x800fffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvseqi_w(__m256i_op0,9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
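+  /* __lasx_xvsle_{b,h,w,d} and __lasx_xvsle_{bu,hu,wu,du}: element-wise
+     signed/unsigned "less than or equal" comparison; each result element is
+     all ones when op0 <= op1 and all zeros otherwise.  */
+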
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000460086;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000007f0079;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000f30028;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000df00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbf28b0686066be60;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffff00ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffff00ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ffffff00ff;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ffffffffffffff;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000001a00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000900000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000001a00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000900000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsle_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010201010204;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010102;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010102;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffefe00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsle_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xb70036db12c4007e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xb7146213fc1e0049;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000fefe02fffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xb71c413b199d04b5;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff00ff00ffff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xff000000ff00ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffff00ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00000000ff00ff;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000083f95466;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010100005400;
+  *((unsigned long*)& __m256i_op1[3]) = 0x007f00f8ff7fff80;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fff6a9d8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x007f00f8ff7fff80;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fff6a9d8;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fe0100000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fe0100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsle_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000feb60000b7d0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000feb60000c7eb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000feb60000b7d0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000feb60000c7eb;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsle_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00ffffff00ffff;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdf00000052a00000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5b7f00ff5b7f00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xdf00000052a00000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5b7f00ff5b7f00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8011ffae800c000c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00baff050083ff3c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x80b900b980380038;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0017ffa8008eff31;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff0000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0010000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0010000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0020000f0000000f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0010000f0000000f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0020000f0000000f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0010000f0000000f;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00020421d7d41124;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00020421d7d41124;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000180007f7f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffafaf80000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000180007f7f;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffafaf80000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00197d3200197d56;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00197d3200197d56;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fef7fef7fef7fef;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fef7fef7fef7fef;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fef7fef7fef7fef;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fef7fef7fef7fef;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00ff0000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffbfffa0ffffff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00ff0000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffbfffa0ffffff80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbfffa004fffd8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbfffa004fffd8000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ffff0000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ffff0000ff;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsle_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000001ffe2000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x001fe020001fe020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000001ffe2000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x001fe020001fe020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000ffff0000;
+  __m256i_out = __lasx_xvsle_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff0008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffff0008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff0000ffff;
+  __m256i_out = __lasx_xvsle_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00f7000000f70007;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00f7000000f70007;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0a0a000000000a0a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0a0a0a0a00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0a0a000000000a0a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0a0a0a0a00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000003ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000003ff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffee;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffee;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000ffff0000;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff01fffe00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff01fffe00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff010ff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff010ff0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fff000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fff000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsle_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001fff000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001fff000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffdfff80ffdfff80;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffdfff80ffdfff80;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffdfff80ffdfff80;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffdfff80ffdfff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ff00;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff6361;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4d0a902890b800dc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffff6361;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4d0a902890b800dc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ffe0001fffe0001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ffe0001fffeffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000fdfdfe;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvsle_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000002d;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc02dc02dc02dc02d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000002d;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc02dc02dc02dc02d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffeb683007ffd80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c0df5b41cf;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffeb683007ffd80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c0df5b41cf;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001497c98ea4fca;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001497c98ea4fca;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsle_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x94d7fb5200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00ffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00ffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ffffffffffffff;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x2aaaaa85aaaaaa85;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2aaa48f4aaaa48f4;
+  *((unsigned long*)& __m256i_op1[1]) = 0x2aaaaa85aaaaaa85;
+  *((unsigned long*)& __m256i_op1[0]) = 0x2aaa48f4aaaa48f4;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff0000ffff;
+  __m256i_out = __lasx_xvsle_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff010100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff010100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffffffffffff;
+  __m256i_out = __lasx_xvsle_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00003fef00003fea;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003ff000003ff0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00003fef00003fea;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003ff000003ff0;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsle_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfa15fa15fa15fa14;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfa15fa15fa15fa14;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x05ea05ea05ea05ec;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x05ea05ea05ea05ec;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsle_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
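+  /* Immediate-form compares follow: xvslei_{b,h,w,d} take a signed 5-bit
+     immediate, the *_u variants an unsigned 5-bit immediate.  */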
+  *((unsigned long*)& __m256i_op0[3]) = 0x1828f0e09bad7249;
+  *((unsigned long*)& __m256i_op0[2]) = 0x07ffc1b723953cec;
+  *((unsigned long*)& __m256i_op0[1]) = 0x61f2e9b333aab104;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6bf742aa0d7856a0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_d(__m256i_op0,12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000460086;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000007f0079;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000f30028;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000df00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_w(__m256i_op0,-8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfc2f3183ef7ffff7;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_w(__m256i_op0,0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_h(__m256i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_du(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00217f19ffde80e6;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00037f94fffc806b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00217f19ffde80e6;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00037f94fffc806b;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff000000000000;
+  __m256i_out = __lasx_xvslei_hu(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_bu(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_h(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_d(__m256i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001ffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001ffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_du(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_hu(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,-3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_w(__m256i_op0,-2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_d(__m256i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_w(__m256i_op0,15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x014200c200c200ae;
+  *((unsigned long*)& __m256i_op0[2]) = 0x014200c200c200ae;
+  *((unsigned long*)& __m256i_op0[1]) = 0x014200c200c200ae;
+  *((unsigned long*)& __m256i_op0[0]) = 0x014200c200c200ae;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_h(__m256i_op0,-4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000003f3f3f3c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc6c6c6c68787878a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000003f3f3f3c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8787878a00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_d(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000003f7e3f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffc6cc05c64d960e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000003f7e3f;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff874dc687870000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000101ff01;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00010013000100fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00010013000100fb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_hu(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_hu(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_du(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_w(__m256i_op0,-2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_d(__m256i_op0,1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_bu(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_bu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_d(__m256i_op0,11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_du(__m256i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_w(__m256i_op0,-2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_h(__m256i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000ffff0000;
+  __m256i_out = __lasx_xvslei_hu(__m256i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_op0[2]) = 0xdbcbdbcb0000dbcb;
+  *((unsigned long*)& __m256i_op0[1]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_op0[0]) = 0xdbcbdbcb0000dbcb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_w(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
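+  /* The xvpickve2gr_w call below only exercises scalar element extraction;
+     its int_out result is not asserted in this block.  */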
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_w(__m256i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_w(__m256i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffff00;
+  __m256i_out = __lasx_xvslei_bu(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000c9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000c9;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_h(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_d(__m256i_op0,1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000007ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000007ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_hu(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_h(__m256i_op0,0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_w(__m256i_op0,8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007f7f7f80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f7f7f80;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvslei_bu(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x03802fc000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x03802fc000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_d(__m256i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_hu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x04e8296f18181818;
+  *((unsigned long*)& __m256i_op0[2]) = 0x132feea900000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x04e8296f18181818;
+  *((unsigned long*)& __m256i_op0[0]) = 0x132feea900000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvslei_hu(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x04e8296f3c611818;
+  *((unsigned long*)& __m256i_op0[2]) = 0x032eafee29010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x04e8296f3c611818;
+  *((unsigned long*)& __m256i_op0[0]) = 0x032eafee29010000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00000000ffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00000000ffffff;
+  __m256i_out = __lasx_xvslei_bu(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_hu(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_h(__m256i_op0,2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_bu(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_h(__m256i_op0,-1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_w(__m256i_op0,15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,-2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007f807f80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f807f80;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvslei_hu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000f788f788;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000f788f788;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvslei_bu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffff8900000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffff8900000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000000;
+  __m256i_out = __lasx_xvslei_h(__m256i_op0,-16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_w(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_d(__m256i_op0,9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff000000000000;
+  __m256i_out = __lasx_xvslei_h(__m256i_op0,-8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_d(__m256i_op0,-3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_b(__m256i_op0,2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslei_wu(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_hu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslei_du(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
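+  /* The cases below move from the xvslei_* immediate comparisons to the
+     xvslt_* vector-vector comparisons: each result element is set to all
+     ones when the corresponding element of op0 is strictly less than that
+     of op1 (signed for b/h/w/d, unsigned for bu/hu/wu/du), and to zero
+     otherwise.  */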
+  *((unsigned long*)& __m256i_op0[3]) = 0x1828f0e09bad7249;
+  *((unsigned long*)& __m256i_op0[2]) = 0x07ffc1b723953cec;
+  *((unsigned long*)& __m256i_op0[1]) = 0x61f2e9b333aab104;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6bf742aa0d7856a0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0d41c9a7bdd239a7;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0b025d0ef8fdf987;
+  *((unsigned long*)& __m256i_op1[1]) = 0x002944f92da5a708;
+  *((unsigned long*)& __m256i_op1[0]) = 0x038cf4ea999922ef;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff0000ffff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0xff000000ffffff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffffffff00ff;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000017000000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc06500550055ffab;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000017000000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc06500550055ffab;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x001175f10e4330e8;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff8f0842ff29211e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffff8d9ffa7103d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffffe40;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff0000ffff;
+  __m256i_out = __lasx_xvslt_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvslt_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdb801b6d0962003f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xdb8a3109fe0f0024;
+  *((unsigned long*)& __m256i_op0[1]) = 0x9a7f997fff01ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbe632a4f1c3c5653;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00ff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000022222221;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3dddddddfbbb3bbc;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000022222221;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3dddddddfbbb3bbc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvslt_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000500000005;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000500000005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000500000005;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000500000005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff000000ff;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff3eff3eff3eff3e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff3eff3eff3eff3e;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000500000005;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000500000005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000500000005;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000500000005;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020202031;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020202031;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000002000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000002000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvslt_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000401000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000401000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000401000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000401000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbc74c3d108e05422;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbc1e3e6a5cace67c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xbc74c3d108e0544a;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbc18e696a86565f4;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbc74c3d108e05422;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbc1e3e6a5cace67c;
+  *((unsigned long*)& __m256i_op1[1]) = 0xbc74c3d108e0544a;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbc18e696a86565f4;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ff000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ff000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff000000ff;
+  __m256i_out = __lasx_xvslt_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00220021004a007e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00220021004a007e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00220021004a007e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00220021004a007e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvslt_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100008000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100007fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100008000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100007fff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x5252525252525252;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5252525252525252;
+  *((unsigned long*)& __m256i_op1[1]) = 0x5252525252525252;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5252525252525252;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffe05f8102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffe05f8102;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x43ef87878000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ef87878000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000fc38fc38;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000fc38fc38;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvslt_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000003fbfc04;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000001fdfe02;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000003fbfc04;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000001fdfe02;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000f000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000f000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000040b200002fd4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00007fff0000739c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000040b200002fd4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00007fff0000739c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf7f8f7f8f7f8f7f8;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf7f8f7f8f7f8f7f8;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001400000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001400000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0010511c54440437;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0010511c54440437;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x386000003df80000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x386000003df80000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe0df9f8e;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffe0df9f8e;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffe0df9f8e;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffe0df9f8e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000002000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000002000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff0000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff0000ffffffff;
+  __m256i_out = __lasx_xvslt_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000860601934;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000860601934;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvslt_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000017f00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00007f7f03030000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvslt_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffff00ffff8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff00ffff8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslt_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000003fffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000003fffc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000ffffffff;
+  __m256i_out = __lasx_xvslt_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc58a0a0a07070706;
+  *((unsigned long*)& __m256i_op0[2]) = 0x006b60e4180b0023;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1b39153f334b966a;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf1d75d79efcac002;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,-1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_du(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ff90ff81;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ff90ff81;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,-3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_hu(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_du(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_d(__m256i_op0,5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_bu(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_d(__m256i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_b(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_b(__m256i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,-3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_d(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_du(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_wu(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_du(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_du(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007773;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000003373;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_du(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,-16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000045000d0005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000045000d0005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,-8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x80000000001529c1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80007073cadc3779;
+  *((unsigned long*)& __m256i_op0[1]) = 0x80000000001529c1;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80007073cadc3779;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_d(__m256i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffffffffffff;
+  __m256i_out = __lasx_xvslti_hu(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_bu(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_hu(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_du(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_hu(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_wu(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_du(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x009f00f8007e00f0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f007f0081007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x009f00f8007e00f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f007f0081007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_wu(__m256i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_bu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_b(__m256i_op0,0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0020000f0000000f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010000f0000000f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0020000f0000000f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010000f0000000f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,-4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00220021004a007e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00220021004a007e;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00ff00ff00ff00;
+  __m256i_out = __lasx_xvslti_bu(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffb3b4;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffff5ffff4738;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffb3b4;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffff5ffff4738;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_hu(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_wu(__m256i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_d(__m256i_op0,-2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffe05fc47b400;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffe06003fc000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffe05fc47b400;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffe06003fc000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,-3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_wu(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_b(__m256i_op0,15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00007ff000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00007ff000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000ffffffff;
+  __m256i_out = __lasx_xvslti_bu(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_d(__m256i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_b(__m256i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_b(__m256i_op0,-3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_bu(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe0e0e0e0e0e0e0e0;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe0e0e0e0e0e0e0e0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000e0e0e0e0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe0e0e0e0e0e0e0e0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_wu(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff02000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff02000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff0000ffff;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_d(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_bu(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,-6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_du(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_d(__m256i_op0,10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_bu(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0004000f00100003;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000400030010000f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0004000f00100003;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000400030010000f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_d(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_b(__m256i_op0,15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000004efffe00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000047000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000004efffe00;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000047000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_bu(__m256i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,-8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_wu(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_b(__m256i_op0,11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_wu(__m256i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff0000ffff;
+  __m256i_out = __lasx_xvslti_hu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,-4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_wu(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_bu(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvslti_wu(__m256i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_du(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvslti_w(__m256i_op0,11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvslti_h(__m256i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitsel_v(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitsel_v(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x001f001f02c442af;
+  *((unsigned long*)& __m256i_op0[1]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x001f001f02c442af;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00fe01f000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00fe01f000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xfffffffffefefeff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffff295329;
+  *((unsigned long*)& __m256i_op2[1]) = 0xfffffffffefefeff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffff295329;
+  *((unsigned long*)& __m256i_result[3]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_result[1]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000c40086;
+  __m256i_out = __lasx_xvbitsel_v(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbe21000100000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000505300000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xbe21000100000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000505300000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xc1d75053f0000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xc1d75053f0000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00005053000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00005053000000ff;
+  __m256i_out = __lasx_xvbitsel_v(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000040000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000040000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00000e0000000e00;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000e0000000e00;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000040000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000040000;
+  __m256i_out = __lasx_xvbitsel_v(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvbitsel_v(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitseli_b(__m256i_op0,__m256i_op1,0x3a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000004fb;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000004fb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitseli_b(__m256i_op0,__m256i_op1,0xef);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitseli_b(__m256i_op0,__m256i_op1,0xcd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffd10000006459;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000441000000004;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000040400000104;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdb801b6d0962003f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xdb8a3109fe0f0024;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000007fff01ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xdb8e209d0cce025a;
+  *((unsigned long*)& __m256i_result[3]) = 0x88888a6d0962002e;
+  *((unsigned long*)& __m256i_result[2]) = 0xdb8a3109fe0f0020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000007fff01fffb;
+  *((unsigned long*)& __m256i_result[0]) = 0xdb8e20990cce025a;
+  __m256i_out = __lasx_xvbitseli_b(__m256i_op0,__m256i_op1,0x88);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000002b902b3e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000002b902b3e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000002a102a3a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000002a102a3a;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitseli_b(__m256i_op0,__m256i_op1,0x3a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbitseli_b(__m256i_op0,__m256i_op1,0xd9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000090909090;
+  *((unsigned long*)& __m256i_result[2]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000090909090;
+  *((unsigned long*)& __m256i_result[0]) = 0x9090909090909090;
+  __m256i_out = __lasx_xvbitseli_b(__m256i_op0,__m256i_op1,0x95);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x5555555555555555;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5555555555555555;
+  *((unsigned long*)& __m256i_op0[1]) = 0x5555555555555555;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5555555555555555;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x4545454545454545;
+  *((unsigned long*)& __m256i_result[2]) = 0x4545454545454545;
+  *((unsigned long*)& __m256i_result[1]) = 0x4545454545454545;
+  *((unsigned long*)& __m256i_result[0]) = 0x4545454545454545;
+  __m256i_out = __lasx_xvbitseli_b(__m256i_op0,__m256i_op1,0x4d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x21bb481000ff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01bf481000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x21bb481000ff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x01bf481000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xb1b3b1b1b1b7b1b1;
+  *((unsigned long*)& __m256i_result[2]) = 0xb1b7b1b1b1b1b1b1;
+  *((unsigned long*)& __m256i_result[1]) = 0xb1b3b1b1b1b7b1b1;
+  *((unsigned long*)& __m256i_result[0]) = 0xb1b7b1b1b1b1b1b1;
+  __m256i_out = __lasx_xvbitseli_b(__m256i_op0,__m256i_op1,0xb7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc03fc03fc03fc03f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc03fc03fc03fc03f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000002d;
+  *((unsigned long*)& __m256i_result[2]) = 0xc02dc02dc02dc02d;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000002d;
+  *((unsigned long*)& __m256i_result[0]) = 0xc02dc02dc02dc02d;
+  __m256i_out = __lasx_xvbitseli_b(__m256i_op0,__m256i_op1,0xed);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x60600000ffff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x6060000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x60600000ffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x6060000000000000;
+  __m256i_out = __lasx_xvbitseli_b(__m256i_op0,__m256i_op1,0x60);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-fp-arith.c b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-fp-arith.c
new file mode 100644
index 00000000000..4b380aa0a8a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-fp-arith.c
@@ -0,0 +1,6259 @@
+/* { dg-do run } */
+/* { dg-options "-mlasx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lasxintrin.h>
+
+int main ()
+{
+  __m256i __m256i_op0, __m256i_op1, __m256i_op2, __m256i_out, __m256i_result;
+  __m256 __m256_op0, __m256_op1, __m256_op2, __m256_out, __m256_result;
+  __m256d __m256d_op0, __m256d_op1, __m256d_op2, __m256d_out, __m256d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i = 1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfadd_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0xffffffff;
+  *((int*)& __m256_op1[6]) = 0x00000001;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000002;
+  *((int*)& __m256_op1[3]) = 0xffffffff;
+  *((int*)& __m256_op1[2]) = 0x00000001;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000002;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0x00000001;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000002;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0x00000001;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000002;
+  __m256_out = __lasx_xvfadd_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfadd_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0xffffffff;
+  *((int*)& __m256_op1[4]) = 0xffffffff;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0xffffffff;
+  *((int*)& __m256_op1[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfadd_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x40b2bf4d;
+  *((int*)& __m256_op0[6]) = 0x30313031;
+  *((int*)& __m256_op0[5]) = 0x50005000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x40b2bf4d;
+  *((int*)& __m256_op0[2]) = 0x30313031;
+  *((int*)& __m256_op0[1]) = 0x50005000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x22be22be;
+  *((int*)& __m256_op1[5]) = 0x7fff7fff;
+  *((int*)& __m256_op1[4]) = 0xa2bea2be;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x22be22be;
+  *((int*)& __m256_op1[1]) = 0x7fff7fff;
+  *((int*)& __m256_op1[0]) = 0xa2bea2be;
+  *((int*)& __m256_result[7]) = 0x40b2bf4d;
+  *((int*)& __m256_result[6]) = 0x30313031;
+  *((int*)& __m256_result[5]) = 0x7fff7fff;
+  *((int*)& __m256_result[4]) = 0xa2bea2be;
+  *((int*)& __m256_result[3]) = 0x40b2bf4d;
+  *((int*)& __m256_result[2]) = 0x30313031;
+  *((int*)& __m256_result[1]) = 0x7fff7fff;
+  *((int*)& __m256_result[0]) = 0xa2bea2be;
+  __m256_out = __lasx_xvfadd_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00ff0000;
+  *((int*)& __m256_op1[4]) = 0xffffffff;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00ff0000;
+  *((int*)& __m256_op1[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00ff0000;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00ff0000;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfadd_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x0000008c;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x0000008c;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x0000008c;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x0000008c;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000118;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000118;
+  __m256_out = __lasx_xvfadd_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffff8000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffff8000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffff8000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffff8000;
+  __m256_out = __lasx_xvfadd_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffff0101;
+  *((int*)& __m256_op0[4]) = 0x00000001;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffff0101;
+  *((int*)& __m256_op0[0]) = 0x00000001;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xffff0101;
+  *((int*)& __m256_result[4]) = 0x00000001;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0xffff0101;
+  *((int*)& __m256_result[0]) = 0x00000001;
+  __m256_out = __lasx_xvfadd_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0xffff001f;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x007fe268;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0xffff001f;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x007fe268;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0xffff001f;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x007fe268;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0xffff001f;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x007fe268;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0xffff001f;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0xffff001f;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfsub_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0xffffffff;
+  *((int*)& __m256_op1[6]) = 0xffffffff;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0xffffffff;
+  *((int*)& __m256_op1[2]) = 0xffffffff;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfsub_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x7f800000;
+  *((int*)& __m256_op1[6]) = 0x7f800000;
+  *((int*)& __m256_op1[5]) = 0x7f800000;
+  *((int*)& __m256_op1[4]) = 0x7f800000;
+  *((int*)& __m256_op1[3]) = 0x7f800000;
+  *((int*)& __m256_op1[2]) = 0x7f800000;
+  *((int*)& __m256_op1[1]) = 0x7f800000;
+  *((int*)& __m256_op1[0]) = 0x7f800000;
+  *((int*)& __m256_result[7]) = 0xff800000;
+  *((int*)& __m256_result[6]) = 0xff800000;
+  *((int*)& __m256_result[5]) = 0xff800000;
+  *((int*)& __m256_result[4]) = 0xff800000;
+  *((int*)& __m256_result[3]) = 0xff800000;
+  *((int*)& __m256_result[2]) = 0xff800000;
+  *((int*)& __m256_result[1]) = 0xff800000;
+  *((int*)& __m256_result[0]) = 0xff800000;
+  __m256_out = __lasx_xvfsub_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x02a54290;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x02a54290;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x02a54290;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x0154dc84;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x02a54290;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000089;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x82a54290;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x028aa700;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x82a54290;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x02a54287;
+  __m256_out = __lasx_xvfsub_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfsub_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00004200;
+  *((int*)& __m256_op0[6]) = 0x80000000;
+  *((int*)& __m256_op0[5]) = 0x5fff5fff;
+  *((int*)& __m256_op0[4]) = 0x607f0000;
+  *((int*)& __m256_op0[3]) = 0x00004200;
+  *((int*)& __m256_op0[2]) = 0x80000000;
+  *((int*)& __m256_op0[1]) = 0x5fff5fff;
+  *((int*)& __m256_op0[0]) = 0x607f0000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00004200;
+  *((int*)& __m256_result[6]) = 0x80000000;
+  *((int*)& __m256_result[5]) = 0x5fff5fff;
+  *((int*)& __m256_result[4]) = 0x607f0000;
+  *((int*)& __m256_result[3]) = 0x00004200;
+  *((int*)& __m256_result[2]) = 0x80000000;
+  *((int*)& __m256_result[1]) = 0x5fff5fff;
+  *((int*)& __m256_result[0]) = 0x607f0000;
+  __m256_out = __lasx_xvfsub_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfsub_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfsub_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfsub_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00800080;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000202;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00800080;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000202;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00800080;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000202;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00800080;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000202;
+  __m256_out = __lasx_xvfsub_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xfffefffe;
+  *((int*)& __m256_op0[6]) = 0xfffefffe;
+  *((int*)& __m256_op0[5]) = 0xfffefffe;
+  *((int*)& __m256_op0[4]) = 0xfffefffe;
+  *((int*)& __m256_op0[3]) = 0xfffefffe;
+  *((int*)& __m256_op0[2]) = 0xfffefffe;
+  *((int*)& __m256_op0[1]) = 0xfffefffe;
+  *((int*)& __m256_op0[0]) = 0xfffefffe;
+  *((int*)& __m256_op1[7]) = 0x000023a3;
+  *((int*)& __m256_op1[6]) = 0x00003fff;
+  *((int*)& __m256_op1[5]) = 0x000023a3;
+  *((int*)& __m256_op1[4]) = 0x00003fef;
+  *((int*)& __m256_op1[3]) = 0x000023a3;
+  *((int*)& __m256_op1[2]) = 0x00003fff;
+  *((int*)& __m256_op1[1]) = 0x000023a3;
+  *((int*)& __m256_op1[0]) = 0x00003fef;
+  *((int*)& __m256_result[7]) = 0xfffefffe;
+  *((int*)& __m256_result[6]) = 0xfffefffe;
+  *((int*)& __m256_result[5]) = 0xfffefffe;
+  *((int*)& __m256_result[4]) = 0xfffefffe;
+  *((int*)& __m256_result[3]) = 0xfffefffe;
+  *((int*)& __m256_result[2]) = 0xfffefffe;
+  *((int*)& __m256_result[1]) = 0xfffefffe;
+  *((int*)& __m256_result[0]) = 0xfffefffe;
+  __m256_out = __lasx_xvfsub_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_op1[7]) = 0xffffffff;
+  *((int*)& __m256_op1[6]) = 0xffffffff;
+  *((int*)& __m256_op1[5]) = 0xffffffff;
+  *((int*)& __m256_op1[4]) = 0xffffffff;
+  *((int*)& __m256_op1[3]) = 0xffffffff;
+  *((int*)& __m256_op1[2]) = 0xffffffff;
+  *((int*)& __m256_op1[1]) = 0xffffffff;
+  *((int*)& __m256_op1[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfmul_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmul_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x10101011;
+  *((int*)& __m256_op1[4]) = 0x10101011;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x11111112;
+  *((int*)& __m256_op1[0]) = 0x11111112;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmul_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00060000;
+  *((int*)& __m256_op0[6]) = 0x00040000;
+  *((int*)& __m256_op0[5]) = 0x00020000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00060000;
+  *((int*)& __m256_op0[2]) = 0x00040000;
+  *((int*)& __m256_op0[1]) = 0x00020000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00060000;
+  *((int*)& __m256_op1[6]) = 0x00040000;
+  *((int*)& __m256_op1[5]) = 0x00020000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00060000;
+  *((int*)& __m256_op1[2]) = 0x00040000;
+  *((int*)& __m256_op1[1]) = 0x00020000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmul_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmul_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0xffffffff;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0xffffffff;
+  *((int*)& __m256_op1[4]) = 0xffffffff;
+  *((int*)& __m256_op1[3]) = 0xffffffff;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0xffffffff;
+  *((int*)& __m256_op1[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfmul_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x000000ff;
+  *((int*)& __m256_op0[4]) = 0x000000ff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x000000ff;
+  *((int*)& __m256_op0[0]) = 0x000000ff;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000101;
+  *((int*)& __m256_op1[4]) = 0x00000101;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000101;
+  *((int*)& __m256_op1[0]) = 0x00000101;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmul_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x002a542a;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x002a542a;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfdiv_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000001;
+  *((int*)& __m256_op0[6]) = 0x00000001;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000001;
+  *((int*)& __m256_op0[2]) = 0x00000001;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7fc00000;
+  *((int*)& __m256_result[4]) = 0x7fc00000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7fc00000;
+  *((int*)& __m256_result[0]) = 0x7fc00000;
+  __m256_out = __lasx_xvfdiv_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00fe00fe;
+  *((int*)& __m256_op0[6]) = 0x00fe00fe;
+  *((int*)& __m256_op0[5]) = 0x00fe00fe;
+  *((int*)& __m256_op0[4]) = 0x00fe00fe;
+  *((int*)& __m256_op0[3]) = 0x00fe00fe;
+  *((int*)& __m256_op0[2]) = 0x00fe00fe;
+  *((int*)& __m256_op0[1]) = 0x00fe00fe;
+  *((int*)& __m256_op0[0]) = 0x00fe00fe;
+  *((int*)& __m256_op1[7]) = 0x00fe00fe;
+  *((int*)& __m256_op1[6]) = 0x00fe00fe;
+  *((int*)& __m256_op1[5]) = 0x00fe00fe;
+  *((int*)& __m256_op1[4]) = 0x00fe00fe;
+  *((int*)& __m256_op1[3]) = 0x00fe00fe;
+  *((int*)& __m256_op1[2]) = 0x00fe00fe;
+  *((int*)& __m256_op1[1]) = 0x00fe00fe;
+  *((int*)& __m256_op1[0]) = 0x00fe00fe;
+  *((int*)& __m256_result[7]) = 0x3f800000;
+  *((int*)& __m256_result[6]) = 0x3f800000;
+  *((int*)& __m256_result[5]) = 0x3f800000;
+  *((int*)& __m256_result[4]) = 0x3f800000;
+  *((int*)& __m256_result[3]) = 0x3f800000;
+  *((int*)& __m256_result[2]) = 0x3f800000;
+  *((int*)& __m256_result[1]) = 0x3f800000;
+  *((int*)& __m256_result[0]) = 0x3f800000;
+  __m256_out = __lasx_xvfdiv_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7fc00000;
+  *((int*)& __m256_result[6]) = 0x7fc00000;
+  *((int*)& __m256_result[5]) = 0x7fc00000;
+  *((int*)& __m256_result[4]) = 0x7fc00000;
+  *((int*)& __m256_result[3]) = 0x7fc00000;
+  *((int*)& __m256_result[2]) = 0x7fc00000;
+  *((int*)& __m256_result[1]) = 0x7fc00000;
+  *((int*)& __m256_result[0]) = 0x7fc00000;
+  __m256_out = __lasx_xvfdiv_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x803f6004;
+  *((int*)& __m256_op0[4]) = 0x1f636003;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x803f6004;
+  *((int*)& __m256_op0[0]) = 0x1f636003;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x007f0107;
+  *((int*)& __m256_op1[4]) = 0x00c70106;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x007f0107;
+  *((int*)& __m256_op1[0]) = 0x00c70106;
+  *((int*)& __m256_result[7]) = 0x7fc00000;
+  *((int*)& __m256_result[6]) = 0x7fc00000;
+  *((int*)& __m256_result[5]) = 0xbeff7cfd;
+  *((int*)& __m256_result[4]) = 0x5e123f94;
+  *((int*)& __m256_result[3]) = 0x7fc00000;
+  *((int*)& __m256_result[2]) = 0x7fc00000;
+  *((int*)& __m256_result[1]) = 0xbeff7cfd;
+  *((int*)& __m256_result[0]) = 0x5e123f94;
+  __m256_out = __lasx_xvfdiv_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000008;
+  *((int*)& __m256_op0[6]) = 0x60601934;
+  *((int*)& __m256_op0[5]) = 0x00000008;
+  *((int*)& __m256_op0[4]) = 0x00200028;
+  *((int*)& __m256_op0[3]) = 0x00000008;
+  *((int*)& __m256_op0[2]) = 0x60601934;
+  *((int*)& __m256_op0[1]) = 0x00000008;
+  *((int*)& __m256_op0[0]) = 0x00200028;
+  *((int*)& __m256_op1[7]) = 0xffffffff;
+  *((int*)& __m256_op1[6]) = 0xffffffff;
+  *((int*)& __m256_op1[5]) = 0xffffffff;
+  *((int*)& __m256_op1[4]) = 0xffffffff;
+  *((int*)& __m256_op1[3]) = 0xffffffff;
+  *((int*)& __m256_op1[2]) = 0xffffffff;
+  *((int*)& __m256_op1[1]) = 0xffffffff;
+  *((int*)& __m256_op1[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfdiv_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x7ffffffffffff7ff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffe06df0d7;
+  *((unsigned long*)& __m256d_op1[1]) = 0x7ffffffffffff7ff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffbe8b470f;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ffffffffffff7ff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ffffffffffff7ff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffbe8b470f;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x41d6600000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x41d6600000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0x41d6600000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0x41d6600000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7fffffffffffffff;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256d_result[2]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256d_result[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256d_result[0]) = 0x00007fff00007fff;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256d_op0[2]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256d_op0[0]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x000f000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x000f000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256d_result[2]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256d_result[0]) = 0x7fffffffa2beb040;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x000001c000000134;
+  *((unsigned long*)& __m256d_op0[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x000001c000000134;
+  *((unsigned long*)& __m256d_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x000001c000000134;
+  *((unsigned long*)& __m256d_op1[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x000001c000000134;
+  *((unsigned long*)& __m256d_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000038000000268;
+  *((unsigned long*)& __m256d_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000038000000268;
+  *((unsigned long*)& __m256d_result[0]) = 0x7fff7fff7fff7fff;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000001010100;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000405;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000001010100;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000405;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000001010100;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000405;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000001010100;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000405;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000040;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_op1[3]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256d_op1[2]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256d_op1[1]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256d_op1[0]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256d_result[3]) = 0x00000000ff890000;
+  *((unsigned long*)& __m256d_result[2]) = 0x00000000ff790000;
+  *((unsigned long*)& __m256d_result[1]) = 0x00000000ff890000;
+  *((unsigned long*)& __m256d_result[0]) = 0x00000000ff790000;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x000000000000006d;
+  *((unsigned long*)& __m256d_op0[2]) = 0x000000000010006d;
+  *((unsigned long*)& __m256d_op0[1]) = 0x000000000000006d;
+  *((unsigned long*)& __m256d_op0[0]) = 0x000000000010006d;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000080040;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000080040;
+  *((unsigned long*)& __m256d_result[3]) = 0x00000000000000ad;
+  *((unsigned long*)& __m256d_result[2]) = 0x00000000001800ad;
+  *((unsigned long*)& __m256d_result[1]) = 0x00000000000000ad;
+  *((unsigned long*)& __m256d_result[0]) = 0x00000000001800ad;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x2020000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x2020000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7fffffffffffffff;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffff8000;
+  __m256d_out = __lasx_xvfadd_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x00007ffe81fdfe03;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x80007ffe81fdfe03;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfsub_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfsub_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xc1be9e9e9f000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x41d8585858400000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xc1be9e9e9f000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x41d8585858400000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfsub_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xff00d5007f00ffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0xff00d5007f00ffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256d_result[3]) = 0x7f00d5007f00ffff;
+  *((unsigned long*)& __m256d_result[2]) = 0x7f00ffffff00ffff;
+  *((unsigned long*)& __m256d_result[1]) = 0x7f00d5007f00ffff;
+  *((unsigned long*)& __m256d_result[0]) = 0x7f00ffffff00ffff;
+  __m256d_out = __lasx_xvfsub_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfsub_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffff00000002;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffff00000002;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffff00000002;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffff00000002;
+  __m256d_out = __lasx_xvfsub_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_result[2]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_result[0]) = 0x00ff00fe00ff00fe;
+  __m256d_out = __lasx_xvfsub_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfsub_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfsub_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
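+  /* __lasx_xvfsub_d coverage ends here; the following cases exercise
+     __lasx_xvfmul_d, the per-lane double-precision multiply.  */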
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmul_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000400000001;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000400000001;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmul_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfmul_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000010100000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000010100000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x00008000003f0000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x00390015003529c1;
+  *((unsigned long*)& __m256d_op1[1]) = 0x00008000003f0000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x00390015003529c1;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmul_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x8000000000000000;
+  __m256d_out = __lasx_xvfmul_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
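+  /* __lasx_xvfdiv_d: per-lane double-precision division.  The all-zero
+     operand cases expect 0.0/0.0 to produce the default quiet NaN
+     (0x7ff8000000000000) in every lane.  */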
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000005536aaaaac;
+  *((unsigned long*)& __m256d_op0[2]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000005536aaaaac;
+  *((unsigned long*)& __m256d_op0[0]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0002555400000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0002555400000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x3f2c678e38d1104c;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x3f2c678e38d1104c;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfdiv_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffe367cc82f8989a;
+  *((unsigned long*)& __m256d_op0[2]) = 0x4f90000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffc3aaa8d58f43c8;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256d_op1[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256d_op1[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff8000000000000;
+  __m256d_out = __lasx_xvfdiv_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0010000000100000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0010000000100000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0010000000100000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0010000000100000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000483800;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000483800;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x41cc5bb8a95fd1eb;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x41cc5bb8a95fd1eb;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfdiv_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff8000000000000;
+  __m256d_out = __lasx_xvfdiv_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff8000000000000;
+  __m256d_out = __lasx_xvfdiv_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff8000000000000;
+  __m256d_out = __lasx_xvfdiv_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff8000000000000;
+  __m256d_out = __lasx_xvfdiv_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
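+  /* Switch to single-precision (__m256, 8 x 32-bit lanes):
+     __lasx_xvfmadd_s performs a per-lane fused multiply-add,
+     op0 * op1 + op2, with a single rounding.  */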
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0xffffffff;
+  *((int*)& __m256_op1[2]) = 0xf328dfff;
+  *((int*)& __m256_op1[1]) = 0x6651bfff;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x0000ffff;
+  *((int*)& __m256_op2[6]) = 0x0000ff80;
+  *((int*)& __m256_op2[5]) = 0x00004686;
+  *((int*)& __m256_op2[4]) = 0x00007f79;
+  *((int*)& __m256_op2[3]) = 0x0000ffff;
+  *((int*)& __m256_op2[2]) = 0x0000ffff;
+  *((int*)& __m256_op2[1]) = 0x0000f328;
+  *((int*)& __m256_op2[0]) = 0x0000dfff;
+  *((int*)& __m256_result[7]) = 0x0000ffff;
+  *((int*)& __m256_result[6]) = 0x0000ff80;
+  *((int*)& __m256_result[5]) = 0x00004686;
+  *((int*)& __m256_result[4]) = 0x00007f79;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0x0000ffff;
+  *((int*)& __m256_result[1]) = 0x0000f328;
+  *((int*)& __m256_result[0]) = 0x0000dfff;
+  __m256_out = __lasx_xvfmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xfff10000;
+  *((int*)& __m256_op0[4]) = 0xfff10000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xfff10000;
+  *((int*)& __m256_op0[0]) = 0xfff10000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xfff10000;
+  *((int*)& __m256_result[4]) = 0xfff10000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0xfff10000;
+  *((int*)& __m256_result[0]) = 0xfff10000;
+  __m256_out = __lasx_xvfmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x803f6004;
+  *((int*)& __m256_op2[4]) = 0x1f636003;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x803f6004;
+  *((int*)& __m256_op2[0]) = 0x1f636003;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x803f6004;
+  *((int*)& __m256_result[4]) = 0x1f636003;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x803f6004;
+  *((int*)& __m256_result[0]) = 0x1f636003;
+  __m256_out = __lasx_xvfmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0xffffffff;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0xffffffff;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffb3430a;
+  *((int*)& __m256_op0[4]) = 0x006ed8b8;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffb3430a;
+  *((int*)& __m256_op0[0]) = 0x006ed8b8;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x000001ff;
+  *((int*)& __m256_op1[4]) = 0x000003fe;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x000001ff;
+  *((int*)& __m256_op1[0]) = 0x000003fe;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x000000ff;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x000000ff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xfff3430a;
+  *((int*)& __m256_result[4]) = 0x000000ff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xfff3430a;
+  *((int*)& __m256_result[0]) = 0x000000ff;
+  __m256_out = __lasx_xvfmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xfffeb683;
+  *((int*)& __m256_op0[6]) = 0x9ffffd80;
+  *((int*)& __m256_op0[5]) = 0xfffe97c0;
+  *((int*)& __m256_op0[4]) = 0x20010001;
+  *((int*)& __m256_op0[3]) = 0xfffeb683;
+  *((int*)& __m256_op0[2]) = 0x9ffffd80;
+  *((int*)& __m256_op0[1]) = 0xfffe97c0;
+  *((int*)& __m256_op0[0]) = 0x20010001;
+  *((int*)& __m256_op1[7]) = 0x00009fff;
+  *((int*)& __m256_op1[6]) = 0x9ffffd80;
+  *((int*)& __m256_op1[5]) = 0x0000ffff;
+  *((int*)& __m256_op1[4]) = 0x20010001;
+  *((int*)& __m256_op1[3]) = 0x00009fff;
+  *((int*)& __m256_op1[2]) = 0x9ffffd80;
+  *((int*)& __m256_op1[1]) = 0x0000ffff;
+  *((int*)& __m256_op1[0]) = 0x20010001;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00002080;
+  *((int*)& __m256_op2[4]) = 0xdf5b41cf;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00002080;
+  *((int*)& __m256_op2[0]) = 0xdf5b41cf;
+  *((int*)& __m256_result[7]) = 0xfffeb683;
+  *((int*)& __m256_result[6]) = 0x007ffd80;
+  *((int*)& __m256_result[5]) = 0xfffe97c0;
+  *((int*)& __m256_result[4]) = 0xdf5b41cf;
+  *((int*)& __m256_result[3]) = 0xfffeb683;
+  *((int*)& __m256_result[2]) = 0x007ffd80;
+  *((int*)& __m256_result[1]) = 0xfffe97c0;
+  *((int*)& __m256_result[0]) = 0xdf5b41cf;
+  __m256_out = __lasx_xvfmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_op1[7]) = 0xfffeb664;
+  *((int*)& __m256_op1[6]) = 0x007ffd61;
+  *((int*)& __m256_op1[5]) = 0xfffe97a1;
+  *((int*)& __m256_op1[4]) = 0xdf5b41b0;
+  *((int*)& __m256_op1[3]) = 0xfffeb664;
+  *((int*)& __m256_op1[2]) = 0x007ffd61;
+  *((int*)& __m256_op1[1]) = 0xfffe97a1;
+  *((int*)& __m256_op1[0]) = 0xdf5b41b0;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x94d7fb52;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xfffeb664;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xfffe97a1;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xfffeb664;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xfffe97a1;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0xffffffff;
+  *((int*)& __m256_op1[6]) = 0xffffffff;
+  *((int*)& __m256_op1[5]) = 0xffffffff;
+  *((int*)& __m256_op1[4]) = 0xffffffff;
+  *((int*)& __m256_op1[3]) = 0xffffffff;
+  *((int*)& __m256_op1[2]) = 0xffffffff;
+  *((int*)& __m256_op1[1]) = 0xffffffff;
+  *((int*)& __m256_op1[0]) = 0xffffffff;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
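+  /* __lasx_xvfmsub_s: per-lane single-precision fused multiply-subtract,
+     op0 * op1 - op2.  */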
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0xffffffff;
+  *((int*)& __m256_op1[6]) = 0xffffffff;
+  *((int*)& __m256_op1[5]) = 0xffffffff;
+  *((int*)& __m256_op1[4]) = 0xffffffff;
+  *((int*)& __m256_op1[3]) = 0xffffffff;
+  *((int*)& __m256_op1[2]) = 0xffffffff;
+  *((int*)& __m256_op1[1]) = 0xffffffff;
+  *((int*)& __m256_op1[0]) = 0xffffffff;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0xb70036db;
+  *((int*)& __m256_op1[6]) = 0x12c4007e;
+  *((int*)& __m256_op1[5]) = 0xb7146213;
+  *((int*)& __m256_op1[4]) = 0xfc1e0049;
+  *((int*)& __m256_op1[3]) = 0x000000fe;
+  *((int*)& __m256_op1[2]) = 0xfe02fffe;
+  *((int*)& __m256_op1[1]) = 0xb71c413b;
+  *((int*)& __m256_op1[0]) = 0x199d04b5;
+  *((int*)& __m256_op2[7]) = 0xb70036db;
+  *((int*)& __m256_op2[6]) = 0x12c4007e;
+  *((int*)& __m256_op2[5]) = 0xb7146213;
+  *((int*)& __m256_op2[4]) = 0xfc1e0049;
+  *((int*)& __m256_op2[3]) = 0x000000fe;
+  *((int*)& __m256_op2[2]) = 0xfe02fffe;
+  *((int*)& __m256_op2[1]) = 0xb71c413b;
+  *((int*)& __m256_op2[0]) = 0x199d04b5;
+  *((int*)& __m256_result[7]) = 0x370036db;
+  *((int*)& __m256_result[6]) = 0x92c4007e;
+  *((int*)& __m256_result[5]) = 0x37146213;
+  *((int*)& __m256_result[4]) = 0x7c1e0049;
+  *((int*)& __m256_result[3]) = 0x800000fe;
+  *((int*)& __m256_result[2]) = 0x7e02fffe;
+  *((int*)& __m256_result[1]) = 0x371c413b;
+  *((int*)& __m256_result[0]) = 0x999d04b5;
+  __m256_out = __lasx_xvfmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x3f7f7f7e;
+  *((int*)& __m256_op1[4]) = 0xff800000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x3f7f7f7e;
+  *((int*)& __m256_op1[0]) = 0xff800000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x7fffffff;
+  *((int*)& __m256_op2[4]) = 0xff7fffff;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x7fffffff;
+  *((int*)& __m256_op2[0]) = 0xff7fffff;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x7fffffff;
+  *((int*)& __m256_result[4]) = 0x7fc00000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x7fffffff;
+  *((int*)& __m256_result[0]) = 0x7fc00000;
+  __m256_out = __lasx_xvfmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_op1[7]) = 0xffffffff;
+  *((int*)& __m256_op1[6]) = 0xffffffff;
+  *((int*)& __m256_op1[5]) = 0xffffffff;
+  *((int*)& __m256_op1[4]) = 0xffffffff;
+  *((int*)& __m256_op1[3]) = 0xffffffff;
+  *((int*)& __m256_op1[2]) = 0xffffffff;
+  *((int*)& __m256_op1[1]) = 0xffffffff;
+  *((int*)& __m256_op1[0]) = 0xffffffff;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffffafaf;
+  *((int*)& __m256_op0[4]) = 0xb3b3dc9d;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffffafaf;
+  *((int*)& __m256_op0[0]) = 0xb3b3dc9d;
+  *((int*)& __m256_op1[7]) = 0x00020000;
+  *((int*)& __m256_op1[6]) = 0x00020000;
+  *((int*)& __m256_op1[5]) = 0x00220021;
+  *((int*)& __m256_op1[4]) = 0x004a007e;
+  *((int*)& __m256_op1[3]) = 0x00020000;
+  *((int*)& __m256_op1[2]) = 0x00020000;
+  *((int*)& __m256_op1[1]) = 0x00220021;
+  *((int*)& __m256_op1[0]) = 0x004a007e;
+  *((int*)& __m256_op2[7]) = 0x00000001;
+  *((int*)& __m256_op2[6]) = 0x00007f7f;
+  *((int*)& __m256_op2[5]) = 0x00000001;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000001;
+  *((int*)& __m256_op2[2]) = 0x00007f7f;
+  *((int*)& __m256_op2[1]) = 0x00000001;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x80000001;
+  *((int*)& __m256_result[6]) = 0x80007f7f;
+  *((int*)& __m256_result[5]) = 0xffffafaf;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0x80000001;
+  *((int*)& __m256_result[2]) = 0x80007f7f;
+  *((int*)& __m256_result[1]) = 0xffffafaf;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0xffffffe5;
+  *((int*)& __m256_op2[6]) = 0xffffffe5;
+  *((int*)& __m256_op2[5]) = 0xffffffe5;
+  *((int*)& __m256_op2[4]) = 0xffffffe5;
+  *((int*)& __m256_op2[3]) = 0xffffffe5;
+  *((int*)& __m256_op2[2]) = 0xffffffe5;
+  *((int*)& __m256_op2[1]) = 0xffffffe5;
+  *((int*)& __m256_op2[0]) = 0xffffffe5;
+  *((int*)& __m256_result[7]) = 0xffffffe5;
+  *((int*)& __m256_result[6]) = 0xffffffe5;
+  *((int*)& __m256_result[5]) = 0xffffffe5;
+  *((int*)& __m256_result[4]) = 0xffffffe5;
+  *((int*)& __m256_result[3]) = 0xffffffe5;
+  *((int*)& __m256_result[2]) = 0xffffffe5;
+  *((int*)& __m256_result[1]) = 0xffffffe5;
+  *((int*)& __m256_result[0]) = 0xffffffe5;
+  __m256_out = __lasx_xvfmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xbfffffff;
+  *((int*)& __m256_op0[6]) = 0xffff8000;
+  *((int*)& __m256_op0[5]) = 0xbfff8000;
+  *((int*)& __m256_op0[4]) = 0x80000000;
+  *((int*)& __m256_op0[3]) = 0xbfffffff;
+  *((int*)& __m256_op0[2]) = 0xffff8000;
+  *((int*)& __m256_op0[1]) = 0xbfff8000;
+  *((int*)& __m256_op0[0]) = 0x80000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0xffff8000;
+  *((int*)& __m256_result[5]) = 0x80000000;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0xffff8000;
+  *((int*)& __m256_result[1]) = 0x80000000;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x02020102;
+  *((int*)& __m256_op1[6]) = 0x02020102;
+  *((int*)& __m256_op1[5]) = 0x02020102;
+  *((int*)& __m256_op1[4]) = 0x02020102;
+  *((int*)& __m256_op1[3]) = 0x02020102;
+  *((int*)& __m256_op1[2]) = 0x02020102;
+  *((int*)& __m256_op1[1]) = 0x02020102;
+  *((int*)& __m256_op1[0]) = 0x02020102;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000008;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000008;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000008;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000008;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000008;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000008;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000008;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000008;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000001;
+  *((int*)& __m256_op2[4]) = 0x00000001;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000001;
+  *((int*)& __m256_op2[0]) = 0x00000001;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x80000001;
+  *((int*)& __m256_result[4]) = 0x80000001;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x80000001;
+  *((int*)& __m256_result[0]) = 0x80000001;
+  __m256_out = __lasx_xvfmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000040;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000040;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x40404040;
+  *((int*)& __m256_op2[6]) = 0x40404040;
+  *((int*)& __m256_op2[5]) = 0x40404040;
+  *((int*)& __m256_op2[4]) = 0x40404040;
+  *((int*)& __m256_op2[3]) = 0x40404040;
+  *((int*)& __m256_op2[2]) = 0x40404040;
+  *((int*)& __m256_op2[1]) = 0x40404040;
+  *((int*)& __m256_op2[0]) = 0x40404040;
+  *((int*)& __m256_result[7]) = 0xc0404040;
+  *((int*)& __m256_result[6]) = 0xc0404040;
+  *((int*)& __m256_result[5]) = 0xc0404040;
+  *((int*)& __m256_result[4]) = 0xc0404040;
+  *((int*)& __m256_result[3]) = 0xc0404040;
+  *((int*)& __m256_result[2]) = 0xc0404040;
+  *((int*)& __m256_result[1]) = 0xc0404040;
+  *((int*)& __m256_result[0]) = 0xc0404040;
+  __m256_out = __lasx_xvfmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
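+  /* __lasx_xvfnmadd_s: negated per-lane single-precision fused
+     multiply-add, -(op0 * op1 + op2); the all-zero case therefore
+     expects -0.0 (0x80000000) in every lane.  */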
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0xffffffff;
+  *((int*)& __m256_op1[6]) = 0xffff5f5c;
+  *((int*)& __m256_op1[5]) = 0xffffffff;
+  *((int*)& __m256_op1[4]) = 0xffff5f5c;
+  *((int*)& __m256_op1[3]) = 0xffffffff;
+  *((int*)& __m256_op1[2]) = 0xffff5f5c;
+  *((int*)& __m256_op1[1]) = 0xffffffff;
+  *((int*)& __m256_op1[0]) = 0xffff5f5c;
+  *((int*)& __m256_op2[7]) = 0x0000000f;
+  *((int*)& __m256_op2[6]) = 0x0000000f;
+  *((int*)& __m256_op2[5]) = 0xff00ff0f;
+  *((int*)& __m256_op2[4]) = 0xff005f0f;
+  *((int*)& __m256_op2[3]) = 0x0000000f;
+  *((int*)& __m256_op2[2]) = 0x0000000f;
+  *((int*)& __m256_op2[1]) = 0xff00ff0f;
+  *((int*)& __m256_op2[0]) = 0xff005f0f;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffff5f5c;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffff5f5c;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffff5f5c;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffff5f5c;
+  __m256_out = __lasx_xvfnmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00010001;
+  *((int*)& __m256_op0[6]) = 0x00010000;
+  *((int*)& __m256_op0[5]) = 0x020afefb;
+  *((int*)& __m256_op0[4]) = 0x08140000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000001;
+  *((int*)& __m256_op0[1]) = 0x0003fffc;
+  *((int*)& __m256_op0[0]) = 0x00060000;
+  *((int*)& __m256_op1[7]) = 0x80000000;
+  *((int*)& __m256_op1[6]) = 0x40000000;
+  *((int*)& __m256_op1[5]) = 0x40000000;
+  *((int*)& __m256_op1[4]) = 0x10000010;
+  *((int*)& __m256_op1[3]) = 0x80000000;
+  *((int*)& __m256_op1[2]) = 0x40000000;
+  *((int*)& __m256_op1[1]) = 0x80000000;
+  *((int*)& __m256_op1[0]) = 0x40000010;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x000000ff;
+  *((int*)& __m256_op2[4]) = 0x0001ffff;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x0000ffff;
+  *((int*)& __m256_op2[0]) = 0x00010000;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x80020000;
+  *((int*)& __m256_result[5]) = 0x828aff0b;
+  *((int*)& __m256_result[4]) = 0x8001ffff;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x80000002;
+  *((int*)& __m256_result[1]) = 0x8000ffff;
+  *((int*)& __m256_result[0]) = 0x800d0002;
+  __m256_out = __lasx_xvfnmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x1f3d2101;
+  *((int*)& __m256_op0[6]) = 0x1f3d2101;
+  *((int*)& __m256_op0[5]) = 0x1f3d2101;
+  *((int*)& __m256_op0[4]) = 0xd07dbf01;
+  *((int*)& __m256_op0[3]) = 0x9f1fd080;
+  *((int*)& __m256_op0[2]) = 0x1f3d2101;
+  *((int*)& __m256_op0[1]) = 0x1f3d2101;
+  *((int*)& __m256_op0[0]) = 0xd07dbf01;
+  *((int*)& __m256_op1[7]) = 0x1d949d94;
+  *((int*)& __m256_op1[6]) = 0x9d949d95;
+  *((int*)& __m256_op1[5]) = 0x1d949d94;
+  *((int*)& __m256_op1[4]) = 0x9e1423d4;
+  *((int*)& __m256_op1[3]) = 0x1de9a03f;
+  *((int*)& __m256_op1[2]) = 0x3dd41d95;
+  *((int*)& __m256_op1[1]) = 0x1d949d94;
+  *((int*)& __m256_op1[0]) = 0x9e1423d4;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x8001b72e;
+  *((int*)& __m256_result[6]) = 0x0001b72e;
+  *((int*)& __m256_result[5]) = 0x8001b72e;
+  *((int*)& __m256_result[4]) = 0xaf12d5f0;
+  *((int*)& __m256_result[3]) = 0x00024763;
+  *((int*)& __m256_result[2]) = 0x9d9cb530;
+  *((int*)& __m256_result[1]) = 0x8001b72e;
+  *((int*)& __m256_result[0]) = 0xaf12d5f0;
+  __m256_out = __lasx_xvfnmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x1f0fdf7f;
+  *((int*)& __m256_op0[6]) = 0x3e3b31d4;
+  *((int*)& __m256_op0[5]) = 0x7ff80000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x1f0fdf7f;
+  *((int*)& __m256_op0[2]) = 0x3e3b31d4;
+  *((int*)& __m256_op0[1]) = 0x7ff80000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x80000000;
+  *((int*)& __m256_result[5]) = 0x7ff80000;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x80000000;
+  *((int*)& __m256_result[1]) = 0x7ff80000;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfnmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x80000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x80000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x80000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x80000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x0000ffff;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x0000ffff;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000001;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000001;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000001;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000001;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x80000001;
+  *((int*)& __m256_result[5]) = 0x80000000;
+  *((int*)& __m256_result[4]) = 0x80000001;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x80000001;
+  *((int*)& __m256_result[1]) = 0x80000000;
+  *((int*)& __m256_result[0]) = 0x80000001;
+  __m256_out = __lasx_xvfnmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000200;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000200;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000200;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000200;
+  *((int*)& __m256_op2[7]) = 0xffffffa0;
+  *((int*)& __m256_op2[6]) = 0x00000001;
+  *((int*)& __m256_op2[5]) = 0xffffffe0;
+  *((int*)& __m256_op2[4]) = 0x00000001;
+  *((int*)& __m256_op2[3]) = 0xffffffa0;
+  *((int*)& __m256_op2[2]) = 0x00000001;
+  *((int*)& __m256_op2[1]) = 0xffffffe0;
+  *((int*)& __m256_op2[0]) = 0x00000001;
+  *((int*)& __m256_result[7]) = 0xffffffa0;
+  *((int*)& __m256_result[6]) = 0x80000001;
+  *((int*)& __m256_result[5]) = 0xffffffe0;
+  *((int*)& __m256_result[4]) = 0x80000001;
+  *((int*)& __m256_result[3]) = 0xffffffa0;
+  *((int*)& __m256_result[2]) = 0x80000001;
+  *((int*)& __m256_result[1]) = 0xffffffe0;
+  *((int*)& __m256_result[0]) = 0x80000001;
+  __m256_out = __lasx_xvfnmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x80000000;
+  *((int*)& __m256_result[5]) = 0x80000000;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x80000000;
+  *((int*)& __m256_result[1]) = 0x80000000;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfnmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfnmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x49810081;
+  *((int*)& __m256_op1[6]) = 0x4843ffe1;
+  *((int*)& __m256_op1[5]) = 0x49810081;
+  *((int*)& __m256_op1[4]) = 0x68410001;
+  *((int*)& __m256_op1[3]) = 0x49810081;
+  *((int*)& __m256_op1[2]) = 0x4843ffe1;
+  *((int*)& __m256_op1[1]) = 0x49810081;
+  *((int*)& __m256_op1[0]) = 0x68410001;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x80000000;
+  *((int*)& __m256_result[5]) = 0x80000000;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x80000000;
+  *((int*)& __m256_result[1]) = 0x80000000;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfnmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00009fff;
+  *((int*)& __m256_op0[6]) = 0x00002001;
+  *((int*)& __m256_op0[5]) = 0x0000ffff;
+  *((int*)& __m256_op0[4]) = 0x0000ffff;
+  *((int*)& __m256_op0[3]) = 0x00009fff;
+  *((int*)& __m256_op0[2]) = 0x00002001;
+  *((int*)& __m256_op0[1]) = 0x0000ffff;
+  *((int*)& __m256_op0[0]) = 0x0000ffff;
+  *((int*)& __m256_op1[7]) = 0xfffeb683;
+  *((int*)& __m256_op1[6]) = 0x9ffffd80;
+  *((int*)& __m256_op1[5]) = 0xfffe97c0;
+  *((int*)& __m256_op1[4]) = 0x20010001;
+  *((int*)& __m256_op1[3]) = 0xfffeb683;
+  *((int*)& __m256_op1[2]) = 0x9ffffd80;
+  *((int*)& __m256_op1[1]) = 0xfffe97c0;
+  *((int*)& __m256_op1[0]) = 0x20010001;
+  *((int*)& __m256_op2[7]) = 0x00009fff;
+  *((int*)& __m256_op2[6]) = 0x00002001;
+  *((int*)& __m256_op2[5]) = 0x0000ffff;
+  *((int*)& __m256_op2[4]) = 0x0000ffff;
+  *((int*)& __m256_op2[3]) = 0x00009fff;
+  *((int*)& __m256_op2[2]) = 0x00002001;
+  *((int*)& __m256_op2[1]) = 0x0000ffff;
+  *((int*)& __m256_op2[0]) = 0x0000ffff;
+  *((int*)& __m256_result[7]) = 0xfffeb683;
+  *((int*)& __m256_result[6]) = 0x80002001;
+  *((int*)& __m256_result[5]) = 0xfffe97c0;
+  *((int*)& __m256_result[4]) = 0x8000ffff;
+  *((int*)& __m256_result[3]) = 0xfffeb683;
+  *((int*)& __m256_result[2]) = 0x80002001;
+  *((int*)& __m256_result[1]) = 0xfffe97c0;
+  *((int*)& __m256_result[0]) = 0x8000ffff;
+  __m256_out = __lasx_xvfnmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x7fffffff;
+  *((int*)& __m256_op0[6]) = 0x80000000;
+  *((int*)& __m256_op0[5]) = 0x7fffffff;
+  *((int*)& __m256_op0[4]) = 0x80000000;
+  *((int*)& __m256_op0[3]) = 0x7fffffff;
+  *((int*)& __m256_op0[2]) = 0x80000000;
+  *((int*)& __m256_op0[1]) = 0x7fffffff;
+  *((int*)& __m256_op0[0]) = 0x80000000;
+  *((int*)& __m256_op1[7]) = 0xfd02fd02;
+  *((int*)& __m256_op1[6]) = 0xfd02fd02;
+  *((int*)& __m256_op1[5]) = 0xfd02fd02;
+  *((int*)& __m256_op1[4]) = 0xfd02fd02;
+  *((int*)& __m256_op1[3]) = 0xfd02fd02;
+  *((int*)& __m256_op1[2]) = 0xfd02fd02;
+  *((int*)& __m256_op1[1]) = 0xfd02fd02;
+  *((int*)& __m256_op1[0]) = 0xfd02fd02;
+  *((int*)& __m256_op2[7]) = 0xfd02fd02;
+  *((int*)& __m256_op2[6]) = 0xfd02fd02;
+  *((int*)& __m256_op2[5]) = 0xfd02fd02;
+  *((int*)& __m256_op2[4]) = 0xfd02fd02;
+  *((int*)& __m256_op2[3]) = 0xfd02fd02;
+  *((int*)& __m256_op2[2]) = 0xfd02fd02;
+  *((int*)& __m256_op2[1]) = 0xfd02fd02;
+  *((int*)& __m256_op2[0]) = 0xfd02fd02;
+  *((int*)& __m256_result[7]) = 0x7fffffff;
+  *((int*)& __m256_result[6]) = 0x7d02fd02;
+  *((int*)& __m256_result[5]) = 0x7fffffff;
+  *((int*)& __m256_result[4]) = 0x7d02fd02;
+  *((int*)& __m256_result[3]) = 0x7fffffff;
+  *((int*)& __m256_result[2]) = 0x7d02fd02;
+  *((int*)& __m256_result[1]) = 0x7fffffff;
+  *((int*)& __m256_result[0]) = 0x7d02fd02;
+  __m256_out = __lasx_xvfnmadd_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
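+  /* __lasx_xvfnmsub_s cases: the expected vectors assume the per-lane
+     single-precision result -(op0 * op1 - op2), with NaN operands
+     propagated.  */
+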
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xbf7f7fff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xe651bfff;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0xffffffff;
+  *((int*)& __m256_op2[2]) = 0xf328dfff;
+  *((int*)& __m256_op2[1]) = 0x6651bfff;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x07070707;
+  *((int*)& __m256_op0[5]) = 0x01020400;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00020100;
+  *((int*)& __m256_op0[1]) = 0x07030200;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0xffffff80;
+  *((int*)& __m256_op1[6]) = 0xfefeff00;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x01000400;
+  *((int*)& __m256_op1[3]) = 0xffffff80;
+  *((int*)& __m256_op1[2]) = 0xfeff0000;
+  *((int*)& __m256_op1[1]) = 0x02020080;
+  *((int*)& __m256_op1[0]) = 0x5c800400;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0xffffffff;
+  *((int*)& __m256_op2[2]) = 0xf328dfff;
+  *((int*)& __m256_op2[1]) = 0x6651bfff;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffff80;
+  *((int*)& __m256_result[6]) = 0x46867f79;
+  *((int*)& __m256_result[5]) = 0x80000000;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xf328dfff;
+  *((int*)& __m256_result[1]) = 0x6651bfff;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xe0000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xe0000000;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xe0000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xe0000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x80000000;
+  *((int*)& __m256_op1[4]) = 0x80000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x80000000;
+  *((int*)& __m256_op1[0]) = 0x80000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x80000000;
+  *((int*)& __m256_result[5]) = 0x80000000;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x80000000;
+  *((int*)& __m256_result[1]) = 0x80000000;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x80000000;
+  *((int*)& __m256_result[5]) = 0x80000000;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x80000000;
+  *((int*)& __m256_result[1]) = 0x80000000;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x7f800000;
+  *((int*)& __m256_op2[6]) = 0x7f800000;
+  *((int*)& __m256_op2[5]) = 0x7fc00000;
+  *((int*)& __m256_op2[4]) = 0x7fc00000;
+  *((int*)& __m256_op2[3]) = 0x7f800000;
+  *((int*)& __m256_op2[2]) = 0x7f800000;
+  *((int*)& __m256_op2[1]) = 0x7fc00000;
+  *((int*)& __m256_op2[0]) = 0x7fc00000;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7fc00000;
+  *((int*)& __m256_result[4]) = 0x7fc00000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7fc00000;
+  *((int*)& __m256_result[0]) = 0x7fc00000;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x80000000;
+  *((int*)& __m256_result[5]) = 0x80000000;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x80000000;
+  *((int*)& __m256_result[1]) = 0x80000000;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x7fefffff;
+  *((int*)& __m256_op1[6]) = 0xffffffff;
+  *((int*)& __m256_op1[5]) = 0x7fefffff;
+  *((int*)& __m256_op1[4]) = 0xffffffff;
+  *((int*)& __m256_op1[3]) = 0x7fefffff;
+  *((int*)& __m256_op1[2]) = 0xffffffff;
+  *((int*)& __m256_op1[1]) = 0x7fefffff;
+  *((int*)& __m256_op1[0]) = 0xffffffff;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7fefffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0x7fefffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0x7fefffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0x7fefffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0xf7f8f7f8;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00003f78;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0xf7f8f7f8;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00003f78;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0xf7f8f7f8;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00003f78;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0xf7f8f7f8;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00003f78;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0xff800000;
+  *((int*)& __m256_result[5]) = 0x80000000;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0xff800000;
+  *((int*)& __m256_result[1]) = 0x80000000;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0xffffffff;
+  *((int*)& __m256_op2[4]) = 0xffffffff;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0xffffffff;
+  *((int*)& __m256_op2[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x80000000;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x80000000;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x01010100;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000405;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x01010100;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000405;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x01010100;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000405;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x01010100;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000405;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x01010100;
+  *((int*)& __m256_result[5]) = 0x80000000;
+  *((int*)& __m256_result[4]) = 0x00000405;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x01010100;
+  *((int*)& __m256_result[1]) = 0x80000000;
+  *((int*)& __m256_result[0]) = 0x00000405;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00800080;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000202;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00800080;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000202;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0xff88ff88;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0xff88ff88;
+  *((int*)& __m256_op2[7]) = 0x00000000;
+  *((int*)& __m256_op2[6]) = 0x00000000;
+  *((int*)& __m256_op2[5]) = 0x00000000;
+  *((int*)& __m256_op2[4]) = 0x00000000;
+  *((int*)& __m256_op2[3]) = 0x00000000;
+  *((int*)& __m256_op2[2]) = 0x00000000;
+  *((int*)& __m256_op2[1]) = 0x00000000;
+  *((int*)& __m256_op2[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x80000000;
+  *((int*)& __m256_result[5]) = 0x80000000;
+  *((int*)& __m256_result[4]) = 0xffc8ff88;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x80000000;
+  *((int*)& __m256_result[1]) = 0x80000000;
+  *((int*)& __m256_result[0]) = 0xffc8ff88;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_op2[7]) = 0x001fffff;
+  *((int*)& __m256_op2[6]) = 0xffffffff;
+  *((int*)& __m256_op2[5]) = 0xffffffff;
+  *((int*)& __m256_op2[4]) = 0xffffffff;
+  *((int*)& __m256_op2[3]) = 0x001fffff;
+  *((int*)& __m256_op2[2]) = 0xffffffff;
+  *((int*)& __m256_op2[1]) = 0xffffffff;
+  *((int*)& __m256_op2[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0x001fffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0x001fffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x7fff8000;
+  *((int*)& __m256_op1[4]) = 0x7fff0000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x7fff8000;
+  *((int*)& __m256_op1[0]) = 0x7fff0000;
+  *((int*)& __m256_op2[7]) = 0xffffffff;
+  *((int*)& __m256_op2[6]) = 0xffffffff;
+  *((int*)& __m256_op2[5]) = 0xffffffff;
+  *((int*)& __m256_op2[4]) = 0xffffff10;
+  *((int*)& __m256_op2[3]) = 0xffffffff;
+  *((int*)& __m256_op2[2]) = 0xffffffff;
+  *((int*)& __m256_op2[1]) = 0xffffffff;
+  *((int*)& __m256_op2[0]) = 0xffffff10;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffff10;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffff10;
+  __m256_out = __lasx_xvfnmsub_s(__m256_op0,__m256_op1,__m256_op2);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
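+  /* __lasx_xvfmadd_d cases: the expected vectors assume the per-lane
+     double-precision fused multiply-add op0 * op1 + op2.  */
+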
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xe37affb42fc05f69;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x65fb66c81da8e5ba;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x8b1414140e0e0e0e;
+  *((unsigned long*)& __m256d_op2[2]) = 0x00d6c1c830160048;
+  *((unsigned long*)& __m256d_op2[1]) = 0x36722a7e66972cd6;
+  *((unsigned long*)& __m256d_op2[0]) = 0xe3aebaf4df958004;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0x00d6c1c830160048;
+  *((unsigned long*)& __m256d_result[1]) = 0x36722a7e66972cd6;
+  *((unsigned long*)& __m256d_result[0]) = 0xe3aebaf4df958004;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000ffff0000ff80;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000468600007f79;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000f3280000dfff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0xfe02fe02fee5fe22;
+  *((unsigned long*)& __m256d_op1[0]) = 0xff49fe4200000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x00020001ffb6ffe0;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0049004200000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xbf28b0686066be60;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0xc5c5c5c5c5c5c5c5;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x2);
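+  /* The __lasx_xvpickve2gr_wu result above is stored but not compared
+     against an expected value.  */
+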
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00007f7f00000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00007f7f00007fff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x00000000000f1a40;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000aaaa00008bfe;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000aaaa0000aaaa;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000aaaa00008bfe;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000aaaa0000aaaa;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000aaaa00008bfe;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000aaaa0000aaaa;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000aaaa00008bfe;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000aaaa0000aaaa;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000aaaa00008bfe;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000aaaa0000aaaa;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000aaaa00008bfe;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000aaaa0000aaaa;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0202020202020202;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0202810102020202;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0202020202020202;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0202810102020202;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x00007fff00000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x00007fff00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x00007fff00000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x00007fff00000000;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256d_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x000000000000ffff;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256d_op1[2]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256d_op1[0]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000100010001;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000100010001;
+  *((unsigned long*)& __m256d_result[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000100010001;
+  *((unsigned long*)& __m256d_result[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000100010001;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffff000000;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op2[3]) = 0xd3d3d3d3d3d3d3d3;
+  *((unsigned long*)& __m256d_op2[2]) = 0xd3d3d3d3d3d3d3d3;
+  *((unsigned long*)& __m256d_op2[1]) = 0xd3d3d3d3d3d3d3d3;
+  *((unsigned long*)& __m256d_op2[0]) = 0xd3d3d3d3d3d3d3d3;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
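+  /* __lasx_xvfmsub_d cases: the expected vectors assume the per-lane
+     double-precision fused multiply-subtract op0 * op1 - op2.  */
+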
+  *((unsigned long*)& __m256d_op0[3]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256d_op1[2]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256d_op1[0]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256d_op2[3]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256d_op2[2]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256d_op2[1]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256d_op2[0]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffff5f5c;
+  __m256d_out = __lasx_xvfmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00000000fff0e400;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000007380;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op1[0]) = 0x00000000000f1c00;
+  *((unsigned long*)& __m256d_op2[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256d_op2[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256d_op2[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256d_op2[0]) = 0x00000000fff0e400;
+  *((unsigned long*)& __m256d_result[3]) = 0x80000000ffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0x80000000ffff8c80;
+  *((unsigned long*)& __m256d_result[1]) = 0x80000000ffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0x80000000fff0e400;
+  __m256d_out = __lasx_xvfmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x00000000000001dc;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x00000000000001dc;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x00000000000001dc;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x00000000000001dc;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x80000000000001dc;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x80000000000001dc;
+  __m256d_out = __lasx_xvfmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0404000004040000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0404000004040000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256d_op1[3]) = 0x8011ffee804c004c;
+  *((unsigned long*)& __m256d_op1[2]) = 0x00faff0500c3ff3c;
+  *((unsigned long*)& __m256d_op1[1]) = 0x80f900f980780078;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0057ffa800ceff31;
+  *((unsigned long*)& __m256d_op2[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256d_op2[2]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256d_op2[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256d_op2[0]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256d_result[2]) = 0x80003fc00000428a;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256d_result[0]) = 0x80003fc00000428a;
+  __m256d_out = __lasx_xvfmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op2[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256d_op2[2]) = 0xffffb2f600006f48;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256d_op2[0]) = 0xffffb2f600006f48;
+  *((unsigned long*)& __m256d_result[3]) = 0x8000000100000001;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffb2f600006f48;
+  *((unsigned long*)& __m256d_result[1]) = 0x8000000100000001;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffb2f600006f48;
+  __m256d_out = __lasx_xvfmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
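+  /* __lasx_xvfnmadd_d cases: the expected vectors assume the per-lane
+     double-precision result -(op0 * op1 + op2).  */
+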
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0001010101010101;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000010100;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0001000001000100;
+  *((unsigned long*)& __m256d_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op2[2]) = 0xffffffffbf7f7fff;
+  *((unsigned long*)& __m256d_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op2[0]) = 0xffffffffe651bfff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffbf7f7fff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffe651bfff;
+  __m256d_out = __lasx_xvfnmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x3ff73ff83ff73ff8;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x3ff73ff83ff73ff8;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256d_op2[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256d_op2[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256d_op2[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256d_result[3]) = 0xa020202020202020;
+  *((unsigned long*)& __m256d_result[2]) = 0xa020202020206431;
+  *((unsigned long*)& __m256d_result[1]) = 0xa020202020202020;
+  *((unsigned long*)& __m256d_result[0]) = 0xa020202020206431;
+  __m256d_out = __lasx_xvfnmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256d_op0[2]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256d_op0[0]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256d_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0001b0b1b4b5dd9f;
+  *((unsigned long*)& __m256d_op2[2]) = 0x7f7f7f5c8f374980;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0001b0b1b4b5dd9f;
+  *((unsigned long*)& __m256d_op2[0]) = 0x7f7f7f5c8f374980;
+  *((unsigned long*)& __m256d_result[3]) = 0x8001b0b1b4b5dd9f;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0x8001b0b1b4b5dd9f;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfnmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xff21ff21ff21ff21;
+  *((unsigned long*)& __m256d_op0[2]) = 0xff21ff21ff21ff21;
+  *((unsigned long*)& __m256d_op0[1]) = 0xff21ff21ff21ff21;
+  *((unsigned long*)& __m256d_op0[0]) = 0xff21ff21ff21ff21;
+  *((unsigned long*)& __m256d_op1[3]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256d_op1[2]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256d_op1[1]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256d_op1[0]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xfff0000000000000;
+  __m256d_out = __lasx_xvfnmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x8000000000000000;
+  __m256d_out = __lasx_xvfnmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x1080108010060002;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x1080108010060002;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffe4ffffffe4;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffe4ffffffe4;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffe4ffffffe4;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffe4ffffffe4;
+  *((unsigned long*)& __m256d_op2[3]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7fff00017fff0000;
+  __m256d_out = __lasx_xvfnmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x1716151417161514;
+  *((unsigned long*)& __m256d_op0[2]) = 0x1716151417161514;
+  *((unsigned long*)& __m256d_op0[1]) = 0x1716151417161514;
+  *((unsigned long*)& __m256d_op0[0]) = 0x1716151417161514;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000002780;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000002780;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000002780;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000002780;
+  *((unsigned long*)& __m256d_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x8000000000002780;
+  *((unsigned long*)& __m256d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x8000000000002780;
+  __m256d_out = __lasx_xvfnmadd_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
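+  /* __lasx_xvfnmsub_d cases: the expected vectors assume the per-lane
+     double-precision result -(op0 * op1 - op2).  */
+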
+  *((unsigned long*)& __m256d_op0[3]) = 0x0080200000802000;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0080200000802000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0080200000802000;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0080200000802000;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfnmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x8000000000000000;
+  __m256d_out = __lasx_xvfnmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256d_op0[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256d_op0[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256d_op0[0]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffba0c05;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffba0c05;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000483800;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000483800;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000483800;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffba0c05;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000483800;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffba0c05;
+  __m256d_out = __lasx_xvfnmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x8000000000000000;
+  __m256d_out = __lasx_xvfnmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000005000000020;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000005000000020;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000005000000020;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000005000000020;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000005000000020;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000005000000020;
+  *((unsigned long*)& __m256d_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000005000000020;
+  *((unsigned long*)& __m256d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000005000000020;
+  __m256d_out = __lasx_xvfnmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0010000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0008000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0010000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0008000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0010000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0008000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0010000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0008000000000000;
+  __m256d_out = __lasx_xvfnmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xff0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256d_op0[1]) = 0xff0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x8000000000000000;
+  __m256d_out = __lasx_xvfnmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffff801000000010;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffff800300000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffff801000000010;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffff800300000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffe0000000;
+  __m256d_out = __lasx_xvfnmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00000000ffffffce;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00000000ffffffce;
+  *((unsigned long*)& __m256d_op1[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000700000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000700000000;
+  *((unsigned long*)& __m256d_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x8000000000000000;
+  __m256d_out = __lasx_xvfnmsub_d(__m256d_op0,__m256d_op1,__m256d_op2);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
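+  /* __lasx_xvfmax_s (xvfmax.s): IEEE 754 maxNum per single-precision element;
+     when one operand is a NaN, the other operand is returned.  */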
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00010101;
+  *((int*)& __m256_op1[6]) = 0x01010101;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00010100;
+  *((int*)& __m256_op1[1]) = 0x00010000;
+  *((int*)& __m256_op1[0]) = 0x01000100;
+  *((int*)& __m256_result[7]) = 0x00010101;
+  *((int*)& __m256_result[6]) = 0x01010101;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00010100;
+  *((int*)& __m256_result[1]) = 0x00010000;
+  *((int*)& __m256_result[0]) = 0x01000100;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x59800000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x59800000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x59800000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x59800000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00010001;
+  *((int*)& __m256_op1[6]) = 0x00010001;
+  *((int*)& __m256_op1[5]) = 0x00010001;
+  *((int*)& __m256_op1[4]) = 0x00010001;
+  *((int*)& __m256_op1[3]) = 0x00010001;
+  *((int*)& __m256_op1[2]) = 0x00010001;
+  *((int*)& __m256_op1[1]) = 0x00010001;
+  *((int*)& __m256_op1[0]) = 0x00010001;
+  *((int*)& __m256_result[7]) = 0x00010001;
+  *((int*)& __m256_result[6]) = 0x00010001;
+  *((int*)& __m256_result[5]) = 0x00010001;
+  *((int*)& __m256_result[4]) = 0x00010001;
+  *((int*)& __m256_result[3]) = 0x00010001;
+  *((int*)& __m256_result[2]) = 0x00010001;
+  *((int*)& __m256_result[1]) = 0x00010001;
+  *((int*)& __m256_result[0]) = 0x00010001;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x7fefffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x7fefffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x000000ff;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x000000ff;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00003fe0;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00003fe0;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00003fe0;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00003fe0;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x0000000e;
+  *((int*)& __m256_op1[6]) = 0x0000000e;
+  *((int*)& __m256_op1[5]) = 0x0000000e;
+  *((int*)& __m256_op1[4]) = 0x0000000e;
+  *((int*)& __m256_op1[3]) = 0x0000000e;
+  *((int*)& __m256_op1[2]) = 0x0000000e;
+  *((int*)& __m256_op1[1]) = 0x0000000e;
+  *((int*)& __m256_op1[0]) = 0x0000000e;
+  *((int*)& __m256_result[7]) = 0x0000000e;
+  *((int*)& __m256_result[6]) = 0x0000000e;
+  *((int*)& __m256_result[5]) = 0x0000000e;
+  *((int*)& __m256_result[4]) = 0x0000000e;
+  *((int*)& __m256_result[3]) = 0x0000000e;
+  *((int*)& __m256_result[2]) = 0x0000000e;
+  *((int*)& __m256_result[1]) = 0x0000000e;
+  *((int*)& __m256_result[0]) = 0x0000000e;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0xffdbbbcf;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0xffb8579f;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0xffdbbbcf;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0xffb8579f;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0xfff8579f;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0xfff8579f;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x40404040;
+  *((int*)& __m256_op1[6]) = 0x40404040;
+  *((int*)& __m256_op1[5]) = 0x40404040;
+  *((int*)& __m256_op1[4]) = 0x40404040;
+  *((int*)& __m256_op1[3]) = 0x40404040;
+  *((int*)& __m256_op1[2]) = 0x40404040;
+  *((int*)& __m256_op1[1]) = 0x40404040;
+  *((int*)& __m256_op1[0]) = 0x40404040;
+  *((int*)& __m256_result[7]) = 0x40404040;
+  *((int*)& __m256_result[6]) = 0x40404040;
+  *((int*)& __m256_result[5]) = 0x40404040;
+  *((int*)& __m256_result[4]) = 0x40404040;
+  *((int*)& __m256_result[3]) = 0x40404040;
+  *((int*)& __m256_result[2]) = 0x40404040;
+  *((int*)& __m256_result[1]) = 0x40404040;
+  *((int*)& __m256_result[0]) = 0x40404040;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x0000006d;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x0010006d;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x0000006d;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x0010006d;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00080040;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00080040;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00080040;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00080040;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00080040;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x0010006d;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00080040;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x0010006d;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x000002ff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x000002ff;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x000002ff;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x000002ff;
+  __m256_out = __lasx_xvfmax_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
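+  /* __lasx_xvfmax_d (xvfmax.d): IEEE 754 maxNum per double-precision element.  */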
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x000000040000fff8;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x000000040000fff8;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x000000040000fff8;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmax_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000008000000080;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000008000000080;
+  *((unsigned long*)& __m256d_op1[3]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256d_op1[2]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256d_op1[1]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256d_op1[0]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256d_result[2]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256d_result[0]) = 0x45c5c5c545c5c5c5;
+  __m256d_out = __lasx_xvfmax_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000004290;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00000000002a96ba;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000004290;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00000000002a96ba;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000083f95466;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0101010100005400;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000004290;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000083f95466;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000004290;
+  *((unsigned long*)& __m256d_result[0]) = 0x0101010100005400;
+  __m256d_out = __lasx_xvfmax_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmax_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmax_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0101000101010001;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0101000101010001;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0101000101010001;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0101000101010001;
+  __m256d_out = __lasx_xvfmax_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
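+  /* __lasx_xvfmin_d (xvfmin.d): IEEE 754 minNum per double-precision element;
+     a NaN operand yields the other operand.  */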
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000100010001;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000100010001;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmin_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmin_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmin_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0200000202000002;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0200000202000002;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0101000101010001;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0101000101010001;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0101000101010001;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0101000101010001;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0101000101010001;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0101000101010001;
+  __m256d_out = __lasx_xvfmin_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmin_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmin_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256d_op1[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256d_op1[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256d_op1[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmin_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmin_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
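+  /* __lasx_xvfmaxa_s (xvfmaxa.s): per element, select the single-precision
+     operand with the larger magnitude (absolute value).  */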
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00005555;
+  *((int*)& __m256_op1[6]) = 0x00005555;
+  *((int*)& __m256_op1[5]) = 0x000307ff;
+  *((int*)& __m256_op1[4]) = 0xfe72e815;
+  *((int*)& __m256_op1[3]) = 0x00005555;
+  *((int*)& __m256_op1[2]) = 0x00005555;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000015;
+  *((int*)& __m256_result[7]) = 0x00005555;
+  *((int*)& __m256_result[6]) = 0x00005555;
+  *((int*)& __m256_result[5]) = 0x000307ff;
+  *((int*)& __m256_result[4]) = 0xfe72e815;
+  *((int*)& __m256_result[3]) = 0x00005555;
+  *((int*)& __m256_result[2]) = 0x00005555;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000015;
+  __m256_out = __lasx_xvfmaxa_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmaxa_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00ff00ff;
+  *((int*)& __m256_op0[6]) = 0x00ff00ff;
+  *((int*)& __m256_op0[5]) = 0x00ff00ff;
+  *((int*)& __m256_op0[4]) = 0x000c0000;
+  *((int*)& __m256_op0[3]) = 0x00ff00ff;
+  *((int*)& __m256_op0[2]) = 0x00ff00ff;
+  *((int*)& __m256_op0[1]) = 0x00ff00ff;
+  *((int*)& __m256_op0[0]) = 0x00040000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00ff00ff;
+  *((int*)& __m256_result[6]) = 0x00ff00ff;
+  *((int*)& __m256_result[5]) = 0x00ff00ff;
+  *((int*)& __m256_result[4]) = 0x000c0000;
+  *((int*)& __m256_result[3]) = 0x00ff00ff;
+  *((int*)& __m256_result[2]) = 0x00ff00ff;
+  *((int*)& __m256_result[1]) = 0x00ff00ff;
+  *((int*)& __m256_result[0]) = 0x00040000;
+  __m256_out = __lasx_xvfmaxa_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x000007ff;
+  *((int*)& __m256_op0[6]) = 0x000007ff;
+  *((int*)& __m256_op0[5]) = 0x000007ff;
+  *((int*)& __m256_op0[4]) = 0xfffff800;
+  *((int*)& __m256_op0[3]) = 0x000007ff;
+  *((int*)& __m256_op0[2]) = 0x000007ff;
+  *((int*)& __m256_op0[1]) = 0x000007ff;
+  *((int*)& __m256_op0[0]) = 0xfffff800;
+  *((int*)& __m256_op1[7]) = 0xffffffff;
+  *((int*)& __m256_op1[6]) = 0xffffffff;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0xffffffff;
+  *((int*)& __m256_op1[2]) = 0xffffffff;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x000007ff;
+  *((int*)& __m256_result[6]) = 0x000007ff;
+  *((int*)& __m256_result[5]) = 0x000007ff;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x000007ff;
+  *((int*)& __m256_result[2]) = 0x000007ff;
+  *((int*)& __m256_result[1]) = 0x000007ff;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmaxa_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000001;
+  *((int*)& __m256_op0[5]) = 0x001f00e0;
+  *((int*)& __m256_op0[4]) = 0x1f1f1fff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000001;
+  *((int*)& __m256_op0[1]) = 0x001f00e0;
+  *((int*)& __m256_op0[0]) = 0x1f1f1fff;
+  *((int*)& __m256_op1[7]) = 0x80000000;
+  *((int*)& __m256_op1[6]) = 0x80000000;
+  *((int*)& __m256_op1[5]) = 0x80000000;
+  *((int*)& __m256_op1[4]) = 0xff800000;
+  *((int*)& __m256_op1[3]) = 0x80000000;
+  *((int*)& __m256_op1[2]) = 0x80000000;
+  *((int*)& __m256_op1[1]) = 0x80000000;
+  *((int*)& __m256_op1[0]) = 0xff800000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000001;
+  *((int*)& __m256_result[5]) = 0x001f00e0;
+  *((int*)& __m256_result[4]) = 0xff800000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000001;
+  *((int*)& __m256_result[1]) = 0x001f00e0;
+  *((int*)& __m256_result[0]) = 0xff800000;
+  __m256_out = __lasx_xvfmaxa_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000001;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000001;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000001;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmaxa_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00009fff;
+  *((int*)& __m256_op0[6]) = 0x00002001;
+  *((int*)& __m256_op0[5]) = 0x0000ffff;
+  *((int*)& __m256_op0[4]) = 0x0000ffff;
+  *((int*)& __m256_op0[3]) = 0x00009fff;
+  *((int*)& __m256_op0[2]) = 0x00002001;
+  *((int*)& __m256_op0[1]) = 0x0000ffff;
+  *((int*)& __m256_op0[0]) = 0x0000ffff;
+  *((int*)& __m256_op1[7]) = 0xfffeb683;
+  *((int*)& __m256_op1[6]) = 0x9ffffd80;
+  *((int*)& __m256_op1[5]) = 0xfffe97c0;
+  *((int*)& __m256_op1[4]) = 0x20010001;
+  *((int*)& __m256_op1[3]) = 0xfffeb683;
+  *((int*)& __m256_op1[2]) = 0x9ffffd80;
+  *((int*)& __m256_op1[1]) = 0xfffe97c0;
+  *((int*)& __m256_op1[0]) = 0x20010001;
+  *((int*)& __m256_result[7]) = 0x00009fff;
+  *((int*)& __m256_result[6]) = 0x9ffffd80;
+  *((int*)& __m256_result[5]) = 0x0000ffff;
+  *((int*)& __m256_result[4]) = 0x20010001;
+  *((int*)& __m256_result[3]) = 0x00009fff;
+  *((int*)& __m256_result[2]) = 0x9ffffd80;
+  *((int*)& __m256_result[1]) = 0x0000ffff;
+  *((int*)& __m256_result[0]) = 0x20010001;
+  __m256_out = __lasx_xvfmaxa_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
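+  /* __lasx_xvfmina_s (xvfmina.s): per element, select the single-precision
+     operand with the smaller magnitude.  */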
+  *((int*)& __m256_op0[7]) = 0x00000170;
+  *((int*)& __m256_op0[6]) = 0x00000080;
+  *((int*)& __m256_op0[5]) = 0xc0650055;
+  *((int*)& __m256_op0[4]) = 0x0055ffab;
+  *((int*)& __m256_op0[3]) = 0x00000170;
+  *((int*)& __m256_op0[2]) = 0x00000080;
+  *((int*)& __m256_op0[1]) = 0xc0650055;
+  *((int*)& __m256_op0[0]) = 0x0055ffab;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmina_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0xffff0000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0xffff0000;
+  *((int*)& __m256_op1[7]) = 0xfffefffe;
+  *((int*)& __m256_op1[6]) = 0xfffefffe;
+  *((int*)& __m256_op1[5]) = 0xfffefffe;
+  *((int*)& __m256_op1[4]) = 0xfffefffe;
+  *((int*)& __m256_op1[3]) = 0xfffefffe;
+  *((int*)& __m256_op1[2]) = 0xfffefffe;
+  *((int*)& __m256_op1[1]) = 0xfffefffe;
+  *((int*)& __m256_op1[0]) = 0xfffefffe;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0xffff0000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0xffff0000;
+  __m256_out = __lasx_xvfmina_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmina_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00fe01f0;
+  *((int*)& __m256_op0[6]) = 0x00010000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00c40086;
+  *((int*)& __m256_op0[3]) = 0x00fe01f0;
+  *((int*)& __m256_op0[2]) = 0x00010000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00c40086;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x82a54290;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x028aa700;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x82a54290;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x02a54287;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00010000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00c40086;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00010000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00c40086;
+  __m256_out = __lasx_xvfmina_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x02a54290;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x0154dc84;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x02a54290;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000089;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x02a54290;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x0154dc84;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x02a54290;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000089;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x02a54290;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x0154dc84;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x02a54290;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000089;
+  __m256_out = __lasx_xvfmina_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x04000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x04000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmina_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00100000;
+  *((int*)& __m256_op0[6]) = 0x00100000;
+  *((int*)& __m256_op0[5]) = 0x00100000;
+  *((int*)& __m256_op0[4]) = 0x00100000;
+  *((int*)& __m256_op0[3]) = 0x00100000;
+  *((int*)& __m256_op0[2]) = 0x00100000;
+  *((int*)& __m256_op0[1]) = 0x00100000;
+  *((int*)& __m256_op0[0]) = 0x00100000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmina_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000010;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000010;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmina_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000080;
+  *((int*)& __m256_op0[6]) = 0x00000080;
+  *((int*)& __m256_op0[5]) = 0x00000080;
+  *((int*)& __m256_op0[4]) = 0x00000080;
+  *((int*)& __m256_op0[3]) = 0x00000080;
+  *((int*)& __m256_op0[2]) = 0x00000080;
+  *((int*)& __m256_op0[1]) = 0x00000080;
+  *((int*)& __m256_op0[0]) = 0x00000080;
+  *((int*)& __m256_op1[7]) = 0x00000001;
+  *((int*)& __m256_op1[6]) = 0x00000001;
+  *((int*)& __m256_op1[5]) = 0x00000001;
+  *((int*)& __m256_op1[4]) = 0x00000001;
+  *((int*)& __m256_op1[3]) = 0x00000001;
+  *((int*)& __m256_op1[2]) = 0x00000001;
+  *((int*)& __m256_op1[1]) = 0x00000001;
+  *((int*)& __m256_op1[0]) = 0x00000001;
+  *((int*)& __m256_result[7]) = 0x00000001;
+  *((int*)& __m256_result[6]) = 0x00000001;
+  *((int*)& __m256_result[5]) = 0x00000001;
+  *((int*)& __m256_result[4]) = 0x00000001;
+  *((int*)& __m256_result[3]) = 0x00000001;
+  *((int*)& __m256_result[2]) = 0x00000001;
+  *((int*)& __m256_result[1]) = 0x00000001;
+  *((int*)& __m256_result[0]) = 0x00000001;
+  __m256_out = __lasx_xvfmina_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmina_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfmina_s(__m256_op0,__m256_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
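+  /* __lasx_xvfmaxa_d (xvfmaxa.d): per element, select the double-precision
+     operand with the larger magnitude; a NaN operand yields the other operand.  */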
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x000000040000fff8;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x000000040000fff8;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmaxa_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffff8001;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmaxa_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000018;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000018;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000018;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000018;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmaxa_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0002000000020000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0002000000010000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0002000000010000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0002000000020000;
+  *((unsigned long*)& __m256d_result[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m256d_result[0]) = 0xfff0000000000000;
+  __m256d_out = __lasx_xvfmaxa_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmaxa_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000001;
+  __m256d_out = __lasx_xvfmaxa_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmaxa_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
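+  /* Tests of the __lasx_xvfmina_d builtin.  */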
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000008000000080;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000008000000080;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000008000000080;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmina_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256d_op1[2]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256d_op1[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256d_op1[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmina_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmina_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmina_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffb2f600006f48;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffb2f600006f48;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x00000000000000ff;
+  __m256d_out = __lasx_xvfmina_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7efefefe80ffffff;
+  __m256d_out = __lasx_xvfmina_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0087ff87f807ff87;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0087ff87f807ff87;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfmina_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
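+  /* Tests of the __lasx_xvflogb_s builtin.  */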
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvflogb_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x10101010;
+  *((int*)& __m256_op0[6]) = 0x10101012;
+  *((int*)& __m256_op0[5]) = 0x10101010;
+  *((int*)& __m256_op0[4]) = 0x10101012;
+  *((int*)& __m256_op0[3]) = 0x10101010;
+  *((int*)& __m256_op0[2]) = 0x10101093;
+  *((int*)& __m256_op0[1]) = 0x11111111;
+  *((int*)& __m256_op0[0]) = 0x11111113;
+  *((int*)& __m256_result[7]) = 0xc2be0000;
+  *((int*)& __m256_result[6]) = 0xc2be0000;
+  *((int*)& __m256_result[5]) = 0xc2be0000;
+  *((int*)& __m256_result[4]) = 0xc2be0000;
+  *((int*)& __m256_result[3]) = 0xc2be0000;
+  *((int*)& __m256_result[2]) = 0xc2be0000;
+  *((int*)& __m256_result[1]) = 0xc2ba0000;
+  *((int*)& __m256_result[0]) = 0xc2ba0000;
+  __m256_out = __lasx_xvflogb_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xff800000;
+  *((int*)& __m256_result[6]) = 0xff800000;
+  *((int*)& __m256_result[5]) = 0xff800000;
+  *((int*)& __m256_result[4]) = 0xff800000;
+  *((int*)& __m256_result[3]) = 0xff800000;
+  *((int*)& __m256_result[2]) = 0xff800000;
+  *((int*)& __m256_result[1]) = 0xff800000;
+  *((int*)& __m256_result[0]) = 0xff800000;
+  __m256_out = __lasx_xvflogb_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xff800000;
+  *((int*)& __m256_result[6]) = 0xff800000;
+  *((int*)& __m256_result[5]) = 0xff800000;
+  *((int*)& __m256_result[4]) = 0xff800000;
+  *((int*)& __m256_result[3]) = 0xff800000;
+  *((int*)& __m256_result[2]) = 0xff800000;
+  *((int*)& __m256_result[1]) = 0xff800000;
+  *((int*)& __m256_result[0]) = 0xff800000;
+  __m256_out = __lasx_xvflogb_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000087;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000087;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xff800000;
+  *((int*)& __m256_result[6]) = 0xff800000;
+  *((int*)& __m256_result[5]) = 0xc30e0000;
+  *((int*)& __m256_result[4]) = 0xff800000;
+  *((int*)& __m256_result[3]) = 0xff800000;
+  *((int*)& __m256_result[2]) = 0xff800000;
+  *((int*)& __m256_result[1]) = 0xc30e0000;
+  *((int*)& __m256_result[0]) = 0xff800000;
+  __m256_out = __lasx_xvflogb_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
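+  /* Tests of the __lasx_xvflogb_d builtin.  */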
+  *((unsigned long*)& __m256d_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xc08f780000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256d_result[1]) = 0xc08f780000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvflogb_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xfff0000000000000;
+  __m256d_out = __lasx_xvflogb_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xfff0000000000000;
+  __m256d_out = __lasx_xvflogb_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xfff0000000000000;
+  __m256d_out = __lasx_xvflogb_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xfff0000000000000;
+  __m256d_out = __lasx_xvflogb_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xfff0000000000000;
+  __m256d_out = __lasx_xvflogb_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x3);
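+
+  /* Tests of the __lasx_xvfclass_s builtin.  */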
+  *((int*)& __m256_op0[7]) = 0xfffffff8;
+  *((int*)& __m256_op0[6]) = 0xffffff08;
+  *((int*)& __m256_op0[5]) = 0x00ff00f8;
+  *((int*)& __m256_op0[4]) = 0x00ffcff8;
+  *((int*)& __m256_op0[3]) = 0xfffffff8;
+  *((int*)& __m256_op0[2]) = 0xffffff08;
+  *((int*)& __m256_op0[1]) = 0x00ff00f8;
+  *((int*)& __m256_op0[0]) = 0x00ffcff8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000008000000080;
+  __m256i_out = __lasx_xvfclass_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000002;
+  __m256i_out = __lasx_xvfclass_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000020000000200;
+  __m256i_out = __lasx_xvfclass_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x000000ff;
+  *((int*)& __m256_op0[4]) = 0x000000ff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x000000ff;
+  *((int*)& __m256_op0[0]) = 0x000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010000000100;
+  __m256i_out = __lasx_xvfclass_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xfffffffb;
+  *((int*)& __m256_op0[6]) = 0xfffffffb;
+  *((int*)& __m256_op0[5]) = 0xfffffffb;
+  *((int*)& __m256_op0[4]) = 0xfffffffb;
+  *((int*)& __m256_op0[3]) = 0xfffffffb;
+  *((int*)& __m256_op0[2]) = 0xfffffffb;
+  *((int*)& __m256_op0[1]) = 0xfffffffb;
+  *((int*)& __m256_op0[0]) = 0xfffffffb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000002;
+  __m256i_out = __lasx_xvfclass_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000020000000200;
+  __m256i_out = __lasx_xvfclass_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
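+  /* Tests of the __lasx_xvfclass_d builtin.  */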
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000010001;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000017f0000017d;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000017f0000017f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000100;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000200;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0002000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0002000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000100;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xbf00bf00bf00bf00;
+  *((unsigned long*)& __m256d_op0[2]) = 0xbf84bf00bf00bf0e;
+  *((unsigned long*)& __m256d_op0[1]) = 0xbf00bf00bf00bf00;
+  *((unsigned long*)& __m256d_op0[0]) = 0xbf84bf00bf00bf0e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256d_op0[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256d_op0[0]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00000000ffff0001;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00000000ffff0001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000100;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000100;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000100;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000200;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000080;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000200;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000400000004000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000400000004000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000400000004000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000100;
+  __m256i_out = __lasx_xvfclass_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
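+  /* Tests of the __lasx_xvfsqrt_s builtin.  */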
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
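+  /* Tests of the __lasx_xvfrecip_s builtin.  */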
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x7f800000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x7f800000;
+  __m256_out = __lasx_xvfrecip_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x000000ff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x0000ff00;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x7f800000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x7f800000;
+  __m256_out = __lasx_xvfrecip_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfrecip_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x7f800000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x7f800000;
+  __m256_out = __lasx_xvfrecip_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x7f800000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x7f800000;
+  __m256_out = __lasx_xvfrecip_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xfc003802;
+  *((int*)& __m256_op0[6]) = 0xfc000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0xfc00fc00;
+  *((int*)& __m256_op0[3]) = 0xfc003802;
+  *((int*)& __m256_op0[2]) = 0xfc000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0xfc00fc00;
+  *((int*)& __m256_result[7]) = 0x82ff902d;
+  *((int*)& __m256_result[6]) = 0x83000000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x82fe0bd9;
+  *((int*)& __m256_result[3]) = 0x82ff902d;
+  *((int*)& __m256_result[2]) = 0x83000000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x82fe0bd9;
+  __m256_out = __lasx_xvfrecip_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x7f800000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x7f800000;
+  __m256_out = __lasx_xvfrecip_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x7f800000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x7f800000;
+  __m256_out = __lasx_xvfrecip_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xfd02fd02;
+  *((int*)& __m256_op0[6]) = 0xfd02fd02;
+  *((int*)& __m256_op0[5]) = 0xfd02fd02;
+  *((int*)& __m256_op0[4]) = 0xfd02fd02;
+  *((int*)& __m256_op0[3]) = 0xfd02fd02;
+  *((int*)& __m256_op0[2]) = 0xfd02fd02;
+  *((int*)& __m256_op0[1]) = 0xfd02fd02;
+  *((int*)& __m256_op0[0]) = 0xfd02fd02;
+  *((int*)& __m256_result[7]) = 0x81fa28e4;
+  *((int*)& __m256_result[6]) = 0x81fa28e4;
+  *((int*)& __m256_result[5]) = 0x81fa28e4;
+  *((int*)& __m256_result[4]) = 0x81fa28e4;
+  *((int*)& __m256_result[3]) = 0x81fa28e4;
+  *((int*)& __m256_result[2]) = 0x81fa28e4;
+  *((int*)& __m256_result[1]) = 0x81fa28e4;
+  *((int*)& __m256_result[0]) = 0x81fa28e4;
+  __m256_out = __lasx_xvfrecip_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
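+  /* Tests of the __lasx_xvfrsqrt_s builtin.  */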
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x0000ff80;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x0000ffff;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x60b53246;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x60b5054d;
+  __m256_out = __lasx_xvfrsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0x0060005a;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0x0060005a;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0x5f13ccf5;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0x5f13ccf5;
+  __m256_out = __lasx_xvfrsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x7f800000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x7f800000;
+  __m256_out = __lasx_xvfrsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x7f800000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x7f800000;
+  __m256_out = __lasx_xvfrsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x7f800000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x7f800000;
+  __m256_out = __lasx_xvfrsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x7f800000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x7f800000;
+  __m256_out = __lasx_xvfrsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x7f800000;
+  *((int*)& __m256_result[4]) = 0x7f800000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x7f800000;
+  *((int*)& __m256_result[0]) = 0x7f800000;
+  __m256_out = __lasx_xvfrsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000002;
+  *((int*)& __m256_op0[4]) = 0x00000008;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000002;
+  *((int*)& __m256_op0[0]) = 0x00000008;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x64800000;
+  *((int*)& __m256_result[4]) = 0x64000000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x64800000;
+  *((int*)& __m256_result[0]) = 0x64000000;
+  __m256_out = __lasx_xvfrsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x000000bd;
+  *((int*)& __m256_op0[4]) = 0xfef907bc;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x000000bd;
+  *((int*)& __m256_op0[0]) = 0xfef907bc;
+  *((int*)& __m256_result[7]) = 0x7f800000;
+  *((int*)& __m256_result[6]) = 0x7f800000;
+  *((int*)& __m256_result[5]) = 0x62d2acee;
+  *((int*)& __m256_result[4]) = 0x7fc00000;
+  *((int*)& __m256_result[3]) = 0x7f800000;
+  *((int*)& __m256_result[2]) = 0x7f800000;
+  *((int*)& __m256_result[1]) = 0x62d2acee;
+  *((int*)& __m256_result[0]) = 0x7fc00000;
+  __m256_out = __lasx_xvfrsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfrsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x04e8296f;
+  *((int*)& __m256_op0[6]) = 0x18181818;
+  *((int*)& __m256_op0[5]) = 0x132feea9;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x04e8296f;
+  *((int*)& __m256_op0[2]) = 0x18181818;
+  *((int*)& __m256_op0[1]) = 0x132feea9;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x5cbe15f2;
+  *((int*)& __m256_result[6]) = 0x53261036;
+  *((int*)& __m256_result[5]) = 0x559a674d;
+  *((int*)& __m256_result[4]) = 0x7f800000;
+  *((int*)& __m256_result[3]) = 0x5cbe15f2;
+  *((int*)& __m256_result[2]) = 0x53261036;
+  *((int*)& __m256_result[1]) = 0x559a674d;
+  *((int*)& __m256_result[0]) = 0x7f800000;
+  __m256_out = __lasx_xvfrsqrt_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
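+  /* Tests of the __lasx_xvfsqrt_d builtin.  */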
+  *((unsigned long*)& __m256d_op0[3]) = 0x1e1800001e180000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x1e1800001e180000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x2f03988e2052463e;
+  *((unsigned long*)& __m256d_result[2]) = 0x2f03988e1409212e;
+  *((unsigned long*)& __m256d_result[1]) = 0x2f03988e2052463e;
+  *((unsigned long*)& __m256d_result[0]) = 0x2f03988e1409212e;
+  __m256d_out = __lasx_xvfsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00000000003f7e3f;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffc6cc05c64d960e;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00000000003f7e3f;
+  *((unsigned long*)& __m256d_op0[0]) = 0xff874dc687870000;
+  *((unsigned long*)& __m256d_result[3]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff8000000000000;
+  __m256d_out = __lasx_xvfsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000100000018;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000100000018;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x1f60000000c00000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x1f60000000c00000;
+  __m256d_out = __lasx_xvfsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0003030300000300;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0003030300000300;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0003030300000100;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0003030300000100;
+  *((unsigned long*)& __m256d_result[3]) = 0x1febc46085090ea0;
+  *((unsigned long*)& __m256d_result[2]) = 0x1febc46085090ea0;
+  *((unsigned long*)& __m256d_result[1]) = 0x1febc46085090567;
+  *((unsigned long*)& __m256d_result[0]) = 0x1febc46085090567;
+  __m256d_out = __lasx_xvfsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x1f9689fdb16cabbd;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x1f9689fdb16cabbd;
+  __m256d_out = __lasx_xvfsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffff0000;
+  __m256d_out = __lasx_xvfsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000010000000100;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000010000000100;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x1fa0000000080000;
+  __m256d_out = __lasx_xvfsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffff8000;
+  __m256d_out = __lasx_xvfsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
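+  /* Tests of the __lasx_xvfrecip_d builtin.  */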
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x03fc03fc03f803f8;
+  *((unsigned long*)& __m256d_op0[2]) = 0x03fc03fc03f803f8;
+  *((unsigned long*)& __m256d_op0[1]) = 0x03fc03fc03f803f8;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7be2468acf15f39c;
+  *((unsigned long*)& __m256d_result[2]) = 0x7be2468acf15f39c;
+  *((unsigned long*)& __m256d_result[1]) = 0x7be2468acf15f39c;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256d_op0[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000089;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256d_op0[2]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256d_op0[0]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xaf0489001bd4c0c3;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xaf0489001bd4c0c3;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000a00000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00000000fffff614;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000a00000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00000000fffff614;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000001e0000001e;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000001e0000001e;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000001e0000001e;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000001e0000001e;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0xff80000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x8060000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x8060000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfrecip_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffff00000000;
+  __m256d_out = __lasx_xvfrsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0209fefb08140000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0003fffc00060000;
+  *((unsigned long*)& __m256d_result[3]) = 0x6100000800060005;
+  *((unsigned long*)& __m256d_result[2]) = 0x5ee1c073b800c916;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x5ff00007fff9fff3;
+  __m256d_out = __lasx_xvfrsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x555555553f800000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x555555553f800000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x353bb67af686ad9b;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x353bb67af686ad9b;
+  __m256d_out = __lasx_xvfrsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000001f0000ffff;
+  *((unsigned long*)& __m256d_result[3]) = 0x60000007fffe0001;
+  *((unsigned long*)& __m256d_result[2]) = 0x60000007fffe0001;
+  *((unsigned long*)& __m256d_result[1]) = 0x6056fd4e7926d5c0;
+  *((unsigned long*)& __m256d_result[0]) = 0x6056fd4e1a4616c4;
+  __m256d_out = __lasx_xvfrsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00001bfa000000f9;
+  *((unsigned long*)& __m256d_op0[2]) = 0x000000f900004040;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00001bfa000000f9;
+  *((unsigned long*)& __m256d_op0[0]) = 0x000000f900004040;
+  *((unsigned long*)& __m256d_result[3]) = 0x60183329ceb52cf0;
+  *((unsigned long*)& __m256d_result[2]) = 0x6040392cdaf9b3ff;
+  *((unsigned long*)& __m256d_result[1]) = 0x60183329ceb52cf0;
+  *((unsigned long*)& __m256d_result[0]) = 0x6040392cdaf9b3ff;
+  __m256d_out = __lasx_xvfrsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x3de00103153ff5fb;
+  *((unsigned long*)& __m256d_op0[2]) = 0xbffffffe80000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x3de00103153ff5fb;
+  *((unsigned long*)& __m256d_op0[0]) = 0xbffffffe80000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x40f69fe73c26f4ee;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x40f69fe73c26f4ee;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff8000000000000;
+  __m256d_out = __lasx_xvfrsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256d_result[3]) = 0x606a20bd700e59a3;
+  *((unsigned long*)& __m256d_result[2]) = 0x6066a09e66c5f1bb;
+  *((unsigned long*)& __m256d_result[1]) = 0x606a20bd700e59a3;
+  *((unsigned long*)& __m256d_result[0]) = 0x6066a09e66c5f1bb;
+  __m256d_out = __lasx_xvfrsqrt_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-fp-cvt.c b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-fp-cvt.c
new file mode 100644
index 00000000000..584d37ceaa5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-fp-cvt.c
@@ -0,0 +1,7315 @@
+/* { dg-do run } */
+/* { dg-options "-mlasx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lasxintrin.h>
+
+int main ()
+{
+  __m256i __m256i_op0, __m256i_op1, __m256i_op2, __m256i_out, __m256i_result;
+  __m256 __m256_op0, __m256_op1, __m256_op2, __m256_out, __m256_result;
+  __m256d __m256d_op0, __m256d_op1, __m256d_op2, __m256d_out, __m256d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf000000000000000;
+  *((int*)& __m256_result[7]) = 0xc6000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0xc6000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvtl_s_h(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc080ffff0049ffd2;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0002ff80ffb70000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fffeffb9ff9d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00010000002fff9e;
+  *((int*)& __m256_result[7]) = 0x34000000;
+  *((int*)& __m256_result[6]) = 0xfff00000;
+  *((int*)& __m256_result[5]) = 0xfff6e000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x33800000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x363c0000;
+  *((int*)& __m256_result[0]) = 0xfff3c000;
+  __m256_out = __lasx_xvfcvtl_s_h(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvtl_s_h(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc1d75053f0000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc1d75053f0000000;
+  *((int*)& __m256_result[7]) = 0xc03ae000;
+  *((int*)& __m256_result[6]) = 0x420a6000;
+  *((int*)& __m256_result[5]) = 0xc6000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0xc03ae000;
+  *((int*)& __m256_result[2]) = 0x420a6000;
+  *((int*)& __m256_result[1]) = 0xc6000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvtl_s_h(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x03802fc000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x03802fc000000000;
+  *((int*)& __m256_result[7]) = 0x38600000;
+  *((int*)& __m256_result[6]) = 0x3df80000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x38600000;
+  *((int*)& __m256_result[2]) = 0x3df80000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvtl_s_h(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffe0000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x80000000;
+  *((int*)& __m256_op0[6]) = 0x80000000;
+  *((int*)& __m256_op0[5]) = 0x80000000;
+  *((int*)& __m256_op0[4]) = 0xff800000;
+  *((int*)& __m256_op0[3]) = 0x80000000;
+  *((int*)& __m256_op0[2]) = 0x80000000;
+  *((int*)& __m256_op0[1]) = 0x80000000;
+  *((int*)& __m256_op0[0]) = 0xff800000;
+  *((unsigned long*)& __m256d_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xfff0000000000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffe0000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvtl_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0404010008080808;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0408010008080808;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0404010008080808;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0408010008080808;
+  *((int*)& __m256_result[7]) = 0x38808000;
+  *((int*)& __m256_result[6]) = 0x37800000;
+  *((int*)& __m256_result[5]) = 0x39010000;
+  *((int*)& __m256_result[4]) = 0x39010000;
+  *((int*)& __m256_result[3]) = 0x38808000;
+  *((int*)& __m256_result[2]) = 0x37800000;
+  *((int*)& __m256_result[1]) = 0x39010000;
+  *((int*)& __m256_result[0]) = 0x39010000;
+  __m256_out = __lasx_xvfcvth_s_h(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvth_s_h(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvth_s_h(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000100010001fffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000100010001fffe;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x80000000;
+  *((int*)& __m256_result[5]) = 0x80000000;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x80000000;
+  *((int*)& __m256_result[1]) = 0x80000000;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfcvth_s_h(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000100;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvth_s_h(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvth_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvth_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x0000aaaa;
+  *((int*)& __m256_op0[6]) = 0x00008bfe;
+  *((int*)& __m256_op0[5]) = 0x0000aaaa;
+  *((int*)& __m256_op0[4]) = 0x0000aaaa;
+  *((int*)& __m256_op0[3]) = 0x0000aaaa;
+  *((int*)& __m256_op0[2]) = 0x00008bfe;
+  *((int*)& __m256_op0[1]) = 0x0000aaaa;
+  *((int*)& __m256_op0[0]) = 0x0000aaaa;
+  *((unsigned long*)& __m256d_result[3]) = 0x3795554000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x37917fc000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x3795554000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x37917fc000000000;
+  __m256d_out = __lasx_xvfcvth_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvth_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfcvth_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffe0000000;
+  __m256d_out = __lasx_xvfcvth_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00020006;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00020006;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00020006;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00020006;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x37b0003000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x37b0003000000000;
+  __m256d_out = __lasx_xvfcvth_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0xfffffff0;
+  *((int*)& __m256_op0[6]) = 0xfffffff0;
+  *((int*)& __m256_op0[5]) = 0xfffffff0;
+  *((int*)& __m256_op0[4]) = 0xfffffff0;
+  *((int*)& __m256_op0[3]) = 0xfffffff0;
+  *((int*)& __m256_op0[2]) = 0xfffffff0;
+  *((int*)& __m256_op0[1]) = 0xfffffff0;
+  *((int*)& __m256_op0[0]) = 0xfffffff0;
+  *((unsigned long*)& __m256d_result[3]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xfffffffe00000000;
+  __m256d_out = __lasx_xvfcvth_d_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000003;
+  *((int*)& __m256_op1[6]) = 0x0000000c;
+  *((int*)& __m256_op1[5]) = 0x00000011;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000005;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000008;
+  *((int*)& __m256_op1[0]) = 0x00000010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[6]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[5]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[4]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[3]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[2]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[1]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[0]) = 0x6d6d6d6d;
+  *((int*)& __m256_op1[7]) = 0x6d6d6d6d;
+  *((int*)& __m256_op1[6]) = 0x6d6d6d6d;
+  *((int*)& __m256_op1[5]) = 0x6d6d6d6d;
+  *((int*)& __m256_op1[4]) = 0x6d6d6d6d;
+  *((int*)& __m256_op1[3]) = 0x6d6d6d6d;
+  *((int*)& __m256_op1[2]) = 0x6d6d6d6d;
+  *((int*)& __m256_op1[1]) = 0x6d6d6d6d;
+  *((int*)& __m256_op1[0]) = 0x6d6d6d6d;
+  *((unsigned long*)& __m256i_result[3]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_result[2]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_result[1]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_result[0]) = 0x7c007c007c007c00;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00020000;
+  *((int*)& __m256_op1[6]) = 0x00020000;
+  *((int*)& __m256_op1[5]) = 0x00020000;
+  *((int*)& __m256_op1[4]) = 0x00010000;
+  *((int*)& __m256_op1[3]) = 0x00020000;
+  *((int*)& __m256_op1[2]) = 0x00020000;
+  *((int*)& __m256_op1[1]) = 0x00020000;
+  *((int*)& __m256_op1[0]) = 0x00010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_op1[7]) = 0x71717171;
+  *((int*)& __m256_op1[6]) = 0x71010101;
+  *((int*)& __m256_op1[5]) = 0x8e8e8e8e;
+  *((int*)& __m256_op1[4]) = 0x8f00ffff;
+  *((int*)& __m256_op1[3]) = 0x71717171;
+  *((int*)& __m256_op1[2]) = 0x71010101;
+  *((int*)& __m256_op1[1]) = 0x8e8e8e8e;
+  *((int*)& __m256_op1[0]) = 0x8f00ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7c007c0080008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7c007c0080008000;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xfff10000;
+  *((int*)& __m256_op0[4]) = 0xfff10000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xfff10000;
+  *((int*)& __m256_op0[0]) = 0xfff10000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0xfff10000;
+  *((int*)& __m256_op1[4]) = 0xfff10000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0xfff10000;
+  *((int*)& __m256_op1[0]) = 0xfff10000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ff88ff88;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00040000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00040000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xff00ff00;
+  *((int*)& __m256_op0[6]) = 0x3f003f00;
+  *((int*)& __m256_op0[5]) = 0xff0101fd;
+  *((int*)& __m256_op0[4]) = 0x00010100;
+  *((int*)& __m256_op0[3]) = 0xff00ff00;
+  *((int*)& __m256_op0[2]) = 0x3f003f00;
+  *((int*)& __m256_op0[1]) = 0xff0101fd;
+  *((int*)& __m256_op0[0]) = 0x00010100;
+  *((int*)& __m256_op1[7]) = 0x01ffff43;
+  *((int*)& __m256_op1[6]) = 0x00fffeff;
+  *((int*)& __m256_op1[5]) = 0xfe0000bc;
+  *((int*)& __m256_op1[4]) = 0xff000100;
+  *((int*)& __m256_op1[3]) = 0x01ffff43;
+  *((int*)& __m256_op1[2]) = 0x00fffeff;
+  *((int*)& __m256_op1[1]) = 0xfe0000bc;
+  *((int*)& __m256_op1[0]) = 0xff000100;
+  *((unsigned long*)& __m256i_result[3]) = 0xfc003802fc000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fc00fc00;
+  *((unsigned long*)& __m256i_result[1]) = 0xfc003802fc000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fc00fc00;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_op1[7]) = 0x00000000;
+  *((int*)& __m256_op1[6]) = 0x00000000;
+  *((int*)& __m256_op1[5]) = 0x00000000;
+  *((int*)& __m256_op1[4]) = 0x00000000;
+  *((int*)& __m256_op1[3]) = 0x00000000;
+  *((int*)& __m256_op1[2]) = 0x00000000;
+  *((int*)& __m256_op1[1]) = 0x00000000;
+  *((int*)& __m256_op1[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfcvt_h_s(__m256_op0,__m256_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0cc08723ff900001;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xcc9b89f2f6cef440;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffff00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xfffffff8;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xff800000;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xfffffff8;
+  __m256_out = __lasx_xvfcvt_s_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvt_s_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfcvt_s_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xdbc8000000003fff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xdbc8000000003fff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0xff800000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0xff800000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvt_s_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256d_op1[2]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256d_op1[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256d_op1[0]) = 0xff800000ff800000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xff800000;
+  *((int*)& __m256_result[4]) = 0xff800000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0xff800000;
+  *((int*)& __m256_result[0]) = 0xff800000;
+  __m256_out = __lasx_xvfcvt_s_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvt_s_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xf7f8f7f8f800f800;
+  *((unsigned long*)& __m256d_op1[2]) = 0x00003f784000ff80;
+  *((unsigned long*)& __m256d_op1[1]) = 0xf7f8f7f84000fff9;
+  *((unsigned long*)& __m256d_op1[0]) = 0x00003f784000ff80;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xff800000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0xff800000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvt_s_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000555500005555;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000555500005555;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000555500005555;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000555500005555;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvt_s_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffb6804cb9;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffb7bbdec0;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffffb680489b;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffb7bc02a0;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xfffffffd;
+  *((int*)& __m256_result[4]) = 0xfffffffd;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0xfffffffd;
+  *((int*)& __m256_result[0]) = 0xfffffffd;
+  __m256_out = __lasx_xvfcvt_s_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0101010202020203;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0101010201010102;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0101010202020203;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0101010201010102;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfcvt_s_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256d_op1[2]) = 0x3fff3fff3fff3fc4;
+  *((unsigned long*)& __m256d_op1[1]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256d_op1[0]) = 0x3fff3fff3fff3fc4;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x3ff9fffa;
+  *((int*)& __m256_result[4]) = 0x3ff9fffa;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x3ff9fffa;
+  *((int*)& __m256_result[0]) = 0x3ff9fffa;
+  __m256_out = __lasx_xvfcvt_s_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
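+  /* xvfrint.s rounds each FP32 element to an integral value kept in
+     floating-point format, using the current rounding mode.  */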
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffff5f5c;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffff605a;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffff5f5c;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffff605a;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffff5f5c;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffff605a;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffff5f5c;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffff605a;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xc5c5c5c4;
+  *((int*)& __m256_op0[6]) = 0xc5c5c5c4;
+  *((int*)& __m256_op0[5]) = 0x45c5c5c5;
+  *((int*)& __m256_op0[4]) = 0x45c5c5c5;
+  *((int*)& __m256_op0[3]) = 0xc5c5c5c4;
+  *((int*)& __m256_op0[2]) = 0xc5c5c5c4;
+  *((int*)& __m256_op0[1]) = 0x45c5c5c5;
+  *((int*)& __m256_op0[0]) = 0x45c5c5c5;
+  *((int*)& __m256_result[7]) = 0xc5c5c800;
+  *((int*)& __m256_result[6]) = 0xc5c5c800;
+  *((int*)& __m256_result[5]) = 0x45c5c800;
+  *((int*)& __m256_result[4]) = 0x45c5c800;
+  *((int*)& __m256_result[3]) = 0xc5c5c800;
+  *((int*)& __m256_result[2]) = 0xc5c5c800;
+  *((int*)& __m256_result[1]) = 0x45c5c800;
+  *((int*)& __m256_result[0]) = 0x45c5c800;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0xffff6f20;
+  *((int*)& __m256_op0[5]) = 0x0000781e;
+  *((int*)& __m256_op0[4]) = 0x0000f221;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0xffff6f20;
+  *((int*)& __m256_op0[1]) = 0x0000781e;
+  *((int*)& __m256_op0[0]) = 0x0000f221;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0xffff6f20;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0xffff6f20;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0xffffb3b4;
+  *((int*)& __m256_op0[5]) = 0xfffffff5;
+  *((int*)& __m256_op0[4]) = 0xffff4738;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0xffffb3b4;
+  *((int*)& __m256_op0[1]) = 0xfffffff5;
+  *((int*)& __m256_op0[0]) = 0xffff4738;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0xffffb3b4;
+  *((int*)& __m256_result[5]) = 0xfffffff5;
+  *((int*)& __m256_result[4]) = 0xffff4738;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0xffffb3b4;
+  *((int*)& __m256_result[1]) = 0xfffffff5;
+  *((int*)& __m256_result[0]) = 0xffff4738;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00ff0000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00ff0000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00ff0000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00ff0000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00003fea;
+  *((int*)& __m256_op0[6]) = 0x00013feb;
+  *((int*)& __m256_op0[5]) = 0x00003fe9;
+  *((int*)& __m256_op0[4]) = 0x00014022;
+  *((int*)& __m256_op0[3]) = 0x00003fea;
+  *((int*)& __m256_op0[2]) = 0x00013feb;
+  *((int*)& __m256_op0[1]) = 0x00003fe9;
+  *((int*)& __m256_op0[0]) = 0x00014022;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrint_s(__m256_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
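+  /* xvfrint.d rounds each FP64 element to an integral value kept in
+     floating-point format, using the current rounding mode.  */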
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrint_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrint_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0002000400000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0002000200020006;
+  unsigned_int_result = 0x0000000000020006;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x0);
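+  /* The word extracted by xvpickve2gr.wu above is stored but not asserted
+     in this block.  */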
+  *((unsigned long*)& __m256d_op0[3]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xfffefffefffefffd;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xfffefffefffefffd;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrint_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrint_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrint_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000008050501;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000008050501;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrint_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfrint_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrint_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrint_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrint_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256d_op0[1]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256d_op0[0]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256d_result[3]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256d_result[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256d_result[1]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256d_result[0]) = 0xfffffffffffffff8;
+  __m256d_out = __lasx_xvfrint_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
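+  /* xvfrintrne.s rounds each FP32 element to the nearest integral value,
+     ties to even.  */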
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrne_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrne_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfrintrne_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x01010101;
+  *((int*)& __m256_op0[6]) = 0x01010101;
+  *((int*)& __m256_op0[5]) = 0x01010101;
+  *((int*)& __m256_op0[4]) = 0x00000001;
+  *((int*)& __m256_op0[3]) = 0x01010101;
+  *((int*)& __m256_op0[2]) = 0x01010101;
+  *((int*)& __m256_op0[1]) = 0x01010101;
+  *((int*)& __m256_op0[0]) = 0x00000001;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrne_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
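+  /* xvfrintrz.s rounds each FP32 element toward zero (truncation).  */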
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000300;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000303;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xfffffffe;
+  *((int*)& __m256_op0[5]) = 0xfffffffe;
+  *((int*)& __m256_op0[4]) = 0xfffffefc;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xfffffffe;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xfffffffe;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xfffffffe;
+  *((int*)& __m256_result[5]) = 0xfffffffe;
+  *((int*)& __m256_result[4]) = 0xfffffefc;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xfffffffe;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xfffffffe;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x0001c4e8;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x0001c4e8;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x80000000;
+  *((int*)& __m256_op0[6]) = 0x80000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x80000000;
+  *((int*)& __m256_op0[2]) = 0x80000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0x80000000;
+  *((int*)& __m256_result[6]) = 0x80000000;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0x80000000;
+  *((int*)& __m256_result[2]) = 0x80000000;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xf5fffc00;
+  *((int*)& __m256_op0[6]) = 0xfc000000;
+  *((int*)& __m256_op0[5]) = 0xf5fffc00;
+  *((int*)& __m256_op0[4]) = 0xfc000000;
+  *((int*)& __m256_op0[3]) = 0xf5fffc00;
+  *((int*)& __m256_op0[2]) = 0xfc000000;
+  *((int*)& __m256_op0[1]) = 0xf5fffc00;
+  *((int*)& __m256_op0[0]) = 0xfc000000;
+  *((int*)& __m256_result[7]) = 0xf5fffc00;
+  *((int*)& __m256_result[6]) = 0xfc000000;
+  *((int*)& __m256_result[5]) = 0xf5fffc00;
+  *((int*)& __m256_result[4]) = 0xfc000000;
+  *((int*)& __m256_result[3]) = 0xf5fffc00;
+  *((int*)& __m256_result[2]) = 0xfc000000;
+  *((int*)& __m256_result[1]) = 0xf5fffc00;
+  *((int*)& __m256_result[0]) = 0xfc000000;
+  __m256_out = __lasx_xvfrintrz_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
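+  /* xvfrintrp.s rounds each FP32 element toward positive infinity
+     (ceiling).  */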
+  *((int*)& __m256_op0[7]) = 0x55555555;
+  *((int*)& __m256_op0[6]) = 0x36aaaaac;
+  *((int*)& __m256_op0[5]) = 0x55555555;
+  *((int*)& __m256_op0[4]) = 0xaaaaaaac;
+  *((int*)& __m256_op0[3]) = 0x55555555;
+  *((int*)& __m256_op0[2]) = 0x36aaaaac;
+  *((int*)& __m256_op0[1]) = 0x55555555;
+  *((int*)& __m256_op0[0]) = 0xaaaaaaac;
+  *((int*)& __m256_result[7]) = 0x55555555;
+  *((int*)& __m256_result[6]) = 0x3f800000;
+  *((int*)& __m256_result[5]) = 0x55555555;
+  *((int*)& __m256_result[4]) = 0x80000000;
+  *((int*)& __m256_result[3]) = 0x55555555;
+  *((int*)& __m256_result[2]) = 0x3f800000;
+  *((int*)& __m256_result[1]) = 0x55555555;
+  *((int*)& __m256_result[0]) = 0x80000000;
+  __m256_out = __lasx_xvfrintrp_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrp_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffc741;
+  *((int*)& __m256_op0[6]) = 0x8a023680;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0xffff8845;
+  *((int*)& __m256_op0[2]) = 0xbb954b00;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffc741;
+  *((int*)& __m256_result[6]) = 0x80000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0xffff8845;
+  *((int*)& __m256_result[2]) = 0x80000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrp_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrp_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrp_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfrintrp_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrp_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00200101;
+  *((int*)& __m256_op0[6]) = 0x01610000;
+  *((int*)& __m256_op0[5]) = 0x00612000;
+  *((int*)& __m256_op0[4]) = 0x00610000;
+  *((int*)& __m256_op0[3]) = 0x00200101;
+  *((int*)& __m256_op0[2]) = 0x01610000;
+  *((int*)& __m256_op0[1]) = 0x00612000;
+  *((int*)& __m256_op0[0]) = 0x00610000;
+  *((int*)& __m256_result[7]) = 0x3f800000;
+  *((int*)& __m256_result[6]) = 0x3f800000;
+  *((int*)& __m256_result[5]) = 0x3f800000;
+  *((int*)& __m256_result[4]) = 0x3f800000;
+  *((int*)& __m256_result[3]) = 0x3f800000;
+  *((int*)& __m256_result[2]) = 0x3f800000;
+  *((int*)& __m256_result[1]) = 0x3f800000;
+  *((int*)& __m256_result[0]) = 0x3f800000;
+  __m256_out = __lasx_xvfrintrp_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xfefefefe;
+  *((int*)& __m256_op0[4]) = 0x01010101;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xfefefefe;
+  *((int*)& __m256_op0[0]) = 0x01010101;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0xfefefefe;
+  *((int*)& __m256_result[4]) = 0x3f800000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0xfefefefe;
+  *((int*)& __m256_result[0]) = 0x3f800000;
+  __m256_out = __lasx_xvfrintrp_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrp_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x1c1c1c1c;
+  *((int*)& __m256_op0[6]) = 0x1c1c1c1c;
+  *((int*)& __m256_op0[5]) = 0xfffffffe;
+  *((int*)& __m256_op0[4]) = 0xffffff00;
+  *((int*)& __m256_op0[3]) = 0x1c1c1c1c;
+  *((int*)& __m256_op0[2]) = 0x1c1c1c1c;
+  *((int*)& __m256_op0[1]) = 0xfffffffe;
+  *((int*)& __m256_op0[0]) = 0xffffff00;
+  *((int*)& __m256_result[7]) = 0x3f800000;
+  *((int*)& __m256_result[6]) = 0x3f800000;
+  *((int*)& __m256_result[5]) = 0xfffffffe;
+  *((int*)& __m256_result[4]) = 0xffffff00;
+  *((int*)& __m256_result[3]) = 0x3f800000;
+  *((int*)& __m256_result[2]) = 0x3f800000;
+  *((int*)& __m256_result[1]) = 0xfffffffe;
+  *((int*)& __m256_result[0]) = 0xffffff00;
+  __m256_out = __lasx_xvfrintrp_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
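+  /* xvfrintrm.s rounds each FP32 element toward negative infinity
+     (floor).  */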
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000008;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00080000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrm_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrm_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0x0000ffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0x0000ffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfrintrm_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x5d20a0a1;
+  *((int*)& __m256_op0[6]) = 0x5d20a0a1;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x5d20a0a1;
+  *((int*)& __m256_op0[2]) = 0x5d20a0a1;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x5d20a0a1;
+  *((int*)& __m256_result[6]) = 0x5d20a0a1;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x5d20a0a1;
+  *((int*)& __m256_result[2]) = 0x5d20a0a1;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrm_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x001d001d;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x001d001d;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrm_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000033;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000033;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrm_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000001;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000001;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrm_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrm_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrm_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrne_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
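+  /* xvfrintrne.d is the FP64 variant of round-to-nearest-even; the
+     remaining checks interleave it with xvfrintrne.s.  */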
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000080008001;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000080008001;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrne_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x7c00000880008000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x7c00000880008000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7c00000880008000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7c00000880008000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((int*)& __m256_result[7]) = 0xffffffff;
+  *((int*)& __m256_result[6]) = 0xffffffff;
+  *((int*)& __m256_result[5]) = 0xffffffff;
+  *((int*)& __m256_result[4]) = 0xffffffff;
+  *((int*)& __m256_result[3]) = 0xffffffff;
+  *((int*)& __m256_result[2]) = 0xffffffff;
+  *((int*)& __m256_result[1]) = 0xffffffff;
+  *((int*)& __m256_result[0]) = 0xffffffff;
+  __m256_out = __lasx_xvfrintrne_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256d_op0[2]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256d_op0[0]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256d_result[2]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256d_result[0]) = 0x6040190d00000000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x3eab77367fff4848;
+  *((unsigned long*)& __m256d_op0[2]) = 0x408480007fff0000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x3eab77367fff4848;
+  *((unsigned long*)& __m256d_op0[0]) = 0x408480007fff0000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x4084800000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x4084800000000000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffff0001ffff0001;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffff0001ffff0001;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffff0001ffff0001;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffff0001ffff0001;
+  *((unsigned long*)& __m256d_result[3]) = 0xffff0001ffff0001;
+  *((unsigned long*)& __m256d_result[2]) = 0xffff0001ffff0001;
+  *((unsigned long*)& __m256d_result[1]) = 0xffff0001ffff0001;
+  *((unsigned long*)& __m256d_result[0]) = 0xffff0001ffff0001;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x3fffbfff80000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00004000007f8000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x3fffbfff80000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00004000007f8000;
+  *((unsigned long*)& __m256d_result[3]) = 0x4000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((int*)& __m256_op0[7]) = 0x01010101;
+  *((int*)& __m256_op0[6]) = 0x01010101;
+  *((int*)& __m256_op0[5]) = 0x01010101;
+  *((int*)& __m256_op0[4]) = 0x00000001;
+  *((int*)& __m256_op0[3]) = 0x01010101;
+  *((int*)& __m256_op0[2]) = 0x01010101;
+  *((int*)& __m256_op0[1]) = 0x01010101;
+  *((int*)& __m256_op0[0]) = 0x00000001;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvfrintrne_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrne_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
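+  /* __lasx_xvfrintrz_d: round each double element to an integral value,
+     rounding toward zero (truncation); the result stays in double format.  */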
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfrintrz_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000800000098;
+  *((unsigned long*)& __m256d_op0[2]) = 0x000000040000ffca;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000800000098;
+  *((unsigned long*)& __m256d_op0[0]) = 0x000000040000ff79;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrz_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrz_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x7ff0000000000000;
+  __m256d_out = __lasx_xvfrintrz_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000781;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrz_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x000000001ffe2000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x001fe020001fe020;
+  *((unsigned long*)& __m256d_op0[1]) = 0x000000001ffe2000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x001fe020001fe020;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrz_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfrintrz_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrz_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
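+  /* __lasx_xvfrintrp_d: round each double element toward +inf (ceiling).  */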
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x3ff0000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xfffffefe00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xfffffefe00000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x000100da000100fd;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0001ffe20001fefd;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0001009a000100fd;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0001ff640001fefd;
+  *((unsigned long*)& __m256d_result[3]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x3ff0000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256d_op0[2]) = 0x01fc03fc01fc03fc;
+  *((unsigned long*)& __m256d_op0[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256d_op0[0]) = 0x01fc03fc01fc03fc;
+  *((unsigned long*)& __m256d_result[3]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256d_result[2]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256d_result[0]) = 0x3ff0000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0218ff78fc38fc38;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfc00000000000048;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0218ff78fc38fc38;
+  *((unsigned long*)& __m256d_op0[0]) = 0xfc00000000000048;
+  *((unsigned long*)& __m256d_result[3]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xfc00000000000048;
+  *((unsigned long*)& __m256d_result[1]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xfc00000000000048;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x8000000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256d_op0[1]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256d_op0[0]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256d_result[3]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256d_result[2]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256d_result[1]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256d_result[0]) = 0xfffffff0fffffff0;
+  __m256d_out = __lasx_xvfrintrp_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
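+  /* __lasx_xvfrintrm_d: round each double element toward -inf (floor).  */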
+  *((unsigned long*)& __m256d_op0[3]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrm_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x017e017e01dd61de;
+  *((unsigned long*)& __m256d_op0[2]) = 0x5d637d043bc4fc43;
+  *((unsigned long*)& __m256d_op0[1]) = 0x01dcc2dce31bc35d;
+  *((unsigned long*)& __m256d_op0[0]) = 0x5e041d245b85fc43;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x5d637d043bc4fc43;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x5e041d245b85fc43;
+  __m256d_out = __lasx_xvfrintrm_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256d_op0[2]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256d_op0[1]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256d_op0[0]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256d_result[3]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256d_result[2]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256d_result[1]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256d_result[0]) = 0x7c007c007c007c00;
+  __m256d_out = __lasx_xvfrintrm_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[0]) = 0xffffffffffffffff;
+  __m256d_out = __lasx_xvfrintrm_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrm_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrm_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrm_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrm_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrm_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrm_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_int_result = 0x0000000000000000;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x5);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvfrintrm_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
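+  /* __lasx_xvftint_w_s: convert each single-precision element to a signed
+     32-bit integer; the expected vectors show out-of-range inputs saturating
+     and NaN inputs converting to zero.  */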
+  *((int*)& __m256_op0[7]) = 0x0000ffff;
+  *((int*)& __m256_op0[6]) = 0xc0008001;
+  *((int*)& __m256_op0[5]) = 0x0000ffff;
+  *((int*)& __m256_op0[4]) = 0xc0008001;
+  *((int*)& __m256_op0[3]) = 0x0000ffff;
+  *((int*)& __m256_op0[2]) = 0xc0008001;
+  *((int*)& __m256_op0[1]) = 0x0000ffff;
+  *((int*)& __m256_op0[0]) = 0xc0008001;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fffffffe;
+  __m256i_out = __lasx_xvftint_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x3f3f3f3c;
+  *((int*)& __m256_op0[5]) = 0xc6c6c6c6;
+  *((int*)& __m256_op0[4]) = 0x8787878a;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x3f3f3f3c;
+  *((int*)& __m256_op0[1]) = 0x8787878a;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff9c9d00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x1f0fdf7f;
+  *((int*)& __m256_op0[6]) = 0x3e3b31d4;
+  *((int*)& __m256_op0[5]) = 0x7ff80000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x1f0fdf7f;
+  *((int*)& __m256_op0[2]) = 0x3e3b31d4;
+  *((int*)& __m256_op0[1]) = 0x7ff80000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x55555555;
+  *((int*)& __m256_op0[5]) = 0x00000001;
+  *((int*)& __m256_op0[4]) = 0x00000004;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x55555555;
+  *((int*)& __m256_op0[1]) = 0x00000001;
+  *((int*)& __m256_op0[0]) = 0x00000004;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
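+  /* __lasx_xvftint_l_d: convert each double element to a signed 64-bit
+     integer.  */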
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffcb423a587053;
+  *((unsigned long*)& __m256d_op0[2]) = 0x6d46f43e71141b81;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffcb423a584528;
+  *((unsigned long*)& __m256d_op0[0]) = 0x9bdf36c8d78158a1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x386000003df80000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x386000003df80000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
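+  /* __lasx_xvftintrne_w_s: convert single-precision elements to signed
+     32-bit integers, rounding to nearest even.  */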
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x40000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x40000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000200000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000000;
+  __m256i_out = __lasx_xvftintrne_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffff7e;
+  *((int*)& __m256_op0[4]) = 0xffffff46;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffff7e;
+  *((int*)& __m256_op0[0]) = 0xffffff46;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x0fffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0x0fffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x0fffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0x0fffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xfd12fd12;
+  *((int*)& __m256_op0[6]) = 0xfd12fd12;
+  *((int*)& __m256_op0[5]) = 0xfd12fd12;
+  *((int*)& __m256_op0[4]) = 0xfd12fd12;
+  *((int*)& __m256_op0[3]) = 0xfd12fd12;
+  *((int*)& __m256_op0[2]) = 0xfd12fd12;
+  *((int*)& __m256_op0[1]) = 0xfd12fd12;
+  *((int*)& __m256_op0[0]) = 0xfd12fd12;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvftintrne_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
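+  /* __lasx_xvftintrz_w_s: convert single-precision elements to signed
+     32-bit integers, truncating toward zero.  */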
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000001;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x002e2100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x55555555;
+  *((int*)& __m256_op0[6]) = 0x55555555;
+  *((int*)& __m256_op0[5]) = 0x5d5d5d5d;
+  *((int*)& __m256_op0[4]) = 0x5d555d55;
+  *((int*)& __m256_op0[3]) = 0x55555555;
+  *((int*)& __m256_op0[2]) = 0x55555555;
+  *((int*)& __m256_op0[1]) = 0x5d5ca2a3;
+  *((int*)& __m256_op0[0]) = 0x5d54aaab;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvftintrz_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0xffeeffaf;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000011;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0xffeeffaf;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000011;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00ff00ff;
+  *((int*)& __m256_op0[6]) = 0x00ff00ff;
+  *((int*)& __m256_op0[5]) = 0x00ff00ff;
+  *((int*)& __m256_op0[4]) = 0x00ff00ff;
+  *((int*)& __m256_op0[3]) = 0x00ff00ff;
+  *((int*)& __m256_op0[2]) = 0x00ff00ff;
+  *((int*)& __m256_op0[1]) = 0x00ff00ff;
+  *((int*)& __m256_op0[0]) = 0x00ff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x001d001d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x81fa28e4;
+  *((int*)& __m256_op0[6]) = 0x81fa28e4;
+  *((int*)& __m256_op0[5]) = 0x81fa28e4;
+  *((int*)& __m256_op0[4]) = 0x81fa28e4;
+  *((int*)& __m256_op0[3]) = 0x81fa28e4;
+  *((int*)& __m256_op0[2]) = 0x81fa28e4;
+  *((int*)& __m256_op0[1]) = 0x81fa28e4;
+  *((int*)& __m256_op0[0]) = 0x81fa28e4;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
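+  /* __lasx_xvftintrz_l_d: convert double elements to signed 64-bit
+     integers, truncating toward zero.  */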
+  *((unsigned long*)& __m256d_op0[3]) = 0x1828f0e09bad7249;
+  *((unsigned long*)& __m256d_op0[2]) = 0x07ffc1b723953cec;
+  *((unsigned long*)& __m256d_op0[1]) = 0x61f2e9b333aab104;
+  *((unsigned long*)& __m256d_op0[0]) = 0x6bf742aa0d7856a0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvftintrz_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00ffffff1e9e9e9e;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffff9e9eb09e;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00ffffff1e9e9e9e;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffff9e9eb09e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000001e0007ffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000001e0007ffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000001e0007ffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000001e0007ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
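+  /* __lasx_xvftintrp_w_s: convert single-precision elements to signed
+     32-bit integers, rounding toward +inf.  */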
+  *((int*)& __m256_op0[7]) = 0xffe4ffe6;
+  *((int*)& __m256_op0[6]) = 0xffe5ffe6;
+  *((int*)& __m256_op0[5]) = 0xffe4ffe6;
+  *((int*)& __m256_op0[4]) = 0xffe5ffe6;
+  *((int*)& __m256_op0[3]) = 0xffe4ffe6;
+  *((int*)& __m256_op0[2]) = 0xffe5ffe6;
+  *((int*)& __m256_op0[1]) = 0xffe4ffe6;
+  *((int*)& __m256_op0[0]) = 0xffe5ffe6;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000001;
+  *((int*)& __m256_op0[4]) = 0x00010102;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x80008000;
+  *((int*)& __m256_op0[6]) = 0x80008000;
+  *((int*)& __m256_op0[5]) = 0x80008000;
+  *((int*)& __m256_op0[4]) = 0x80008000;
+  *((int*)& __m256_op0[3]) = 0x80008000;
+  *((int*)& __m256_op0[2]) = 0x80008000;
+  *((int*)& __m256_op0[1]) = 0x80008000;
+  *((int*)& __m256_op0[0]) = 0x80008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x10000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x10000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00ff00ff;
+  *((int*)& __m256_op0[6]) = 0x00ff00ff;
+  *((int*)& __m256_op0[5]) = 0x00ff00ff;
+  *((int*)& __m256_op0[4]) = 0x00ff00ff;
+  *((int*)& __m256_op0[3]) = 0x00ff00ff;
+  *((int*)& __m256_op0[2]) = 0x00ff00ff;
+  *((int*)& __m256_op0[1]) = 0x00ff00ff;
+  *((int*)& __m256_op0[0]) = 0x00ff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvftintrp_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
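+  /* __lasx_xvftintrp_l_d: double-precision elements converted to signed
+     64-bit integers, rounding toward positive infinity.  */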
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvftintrp_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvftintrp_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256d_op0[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256d_op0[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
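+  /* __lasx_xvftintrm_w_s: single-precision elements converted to signed
+     32-bit integers, rounding toward negative infinity.  */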
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xfffefffe;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0xfffefffe;
+  *((int*)& __m256_op0[2]) = 0xfffefffd;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x0707feb6;
+  *((int*)& __m256_op0[6]) = 0x0707b7d0;
+  *((int*)& __m256_op0[5]) = 0x45baa7ef;
+  *((int*)& __m256_op0[4]) = 0x6a95a985;
+  *((int*)& __m256_op0[3]) = 0x0707feb6;
+  *((int*)& __m256_op0[2]) = 0x0707b7d0;
+  *((int*)& __m256_op0[1]) = 0x45baa7ef;
+  *((int*)& __m256_op0[0]) = 0x6a95a985;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000017547fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000017547fffffff;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[6]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[5]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[4]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[3]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[2]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[1]) = 0x6d6d6d6d;
+  *((int*)& __m256_op0[0]) = 0x6d6d6d6d;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xfff10000;
+  *((int*)& __m256_op0[4]) = 0xfff10000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xfff10000;
+  *((int*)& __m256_op0[0]) = 0xfff10000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0xfdfcfda8;
+  *((int*)& __m256_op0[5]) = 0x0000e282;
+  *((int*)& __m256_op0[4]) = 0x1d20ffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0xfdfcfda8;
+  *((int*)& __m256_op0[1]) = 0x0000e282;
+  *((int*)& __m256_op0[0]) = 0x1d20ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
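+  /* __lasx_xvftintrm_l_d: double-precision elements converted to signed
+     64-bit integers, rounding toward negative infinity.  */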
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x555555553f800000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvftintrm_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00000000001c9880;
+  *((unsigned long*)& __m256d_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00000000001c9880;
+  *((unsigned long*)& __m256d_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_l_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
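+  /* __lasx_xvftint_wu_s: single-precision elements converted to unsigned
+     32-bit integers using the current rounding mode.  */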
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xfffefffe;
+  *((int*)& __m256_op0[6]) = 0xfffefffe;
+  *((int*)& __m256_op0[5]) = 0xfffefffe;
+  *((int*)& __m256_op0[4]) = 0xfffefffe;
+  *((int*)& __m256_op0[3]) = 0xfffefffe;
+  *((int*)& __m256_op0[2]) = 0xfffefffe;
+  *((int*)& __m256_op0[1]) = 0xfffefffe;
+  *((int*)& __m256_op0[0]) = 0xfffefffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000200;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000200;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000200;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000200;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xfffffff1;
+  *((int*)& __m256_op0[6]) = 0xfffffff1;
+  *((int*)& __m256_op0[5]) = 0xfffffff1;
+  *((int*)& __m256_op0[4]) = 0xfffffff1;
+  *((int*)& __m256_op0[3]) = 0xfffffff1;
+  *((int*)& __m256_op0[2]) = 0xfffffff1;
+  *((int*)& __m256_op0[1]) = 0xfffffff1;
+  *((int*)& __m256_op0[0]) = 0xfffffff1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x43ef8787;
+  *((int*)& __m256_op0[4]) = 0x8000ffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x43ef8787;
+  *((int*)& __m256_op0[0]) = 0x8000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000001df00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000001df00000000;
+  __m256i_out = __lasx_xvftint_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0x00030005;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0x00030005;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x7ff80000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x7ff80000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x7ff80000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x7ff80000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
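+  /* __lasx_xvftint_lu_d: double-precision elements converted to unsigned
+     64-bit integers using the current rounding mode.  */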
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
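+  /* __lasx_xvftintrz_wu_s: single-precision elements converted to unsigned
+     32-bit integers, rounding toward zero.  */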
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000002;
+  *((int*)& __m256_op0[6]) = 0x00000002;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000002;
+  *((int*)& __m256_op0[2]) = 0x00000002;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x7ff00000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x7ff00000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x7ff00000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x7ff00000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00016e00;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00016e00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_wu_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
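+  /* __lasx_xvftintrz_lu_d: double-precision elements converted to unsigned
+     64-bit integers, rounding toward zero.  */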
+  *((unsigned long*)& __m256d_op0[3]) = 0x38a966b301f41ffd;
+  *((unsigned long*)& __m256d_op0[2]) = 0x5f6108ee13ff0000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xf41a56e8d10201f6;
+  *((unsigned long*)& __m256d_op0[0]) = 0x683b8b34f1020001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvftintrz_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000003868686a20;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0045b8ae81bce1d8;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000003868686a20;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0045b8ae81bce1d8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256d_op0[2]) = 0xc2c2c2c2c2c29cc0;
+  *((unsigned long*)& __m256d_op0[1]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256d_op0[0]) = 0xc2c2c2c2c2c29cc0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x00000000007a00f8;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00ff00ff01640092;
+  *((unsigned long*)& __m256d_op0[1]) = 0x00000000007a00f8;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00ff00ff01640092;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x000000007fff80fe;
+  *((unsigned long*)& __m256d_op0[2]) = 0x000000007fff80fe;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000ffff80007ffe;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000ff007fff80fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000781;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x000408080c111414;
+  *((unsigned long*)& __m256d_op0[2]) = 0x000408080c111414;
+  *((unsigned long*)& __m256d_op0[1]) = 0x000408080c111414;
+  *((unsigned long*)& __m256d_op0[0]) = 0x000408080c111414;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000008e8c000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x000000000fffc000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000008e8c000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x000000000fffc000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_lu_d(__m256d_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
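+  /* __lasx_xvftint_w_d: elements of two double-precision vectors narrowed
+     to one vector of signed 32-bit integers using the current rounding
+     mode.  */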
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000200000003;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000ffff00010002;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0080000200000003;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000ffff00010002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00ff00ffff0000ff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00ff00ffff0000ff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x7fe36364661af18f;
+  *((unsigned long*)& __m256d_op0[2]) = 0x7fe363637fe36364;
+  *((unsigned long*)& __m256d_op0[1]) = 0x7fe36364661af18f;
+  *((unsigned long*)& __m256d_op0[0]) = 0x7fe363637fe36364;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xfffffffffffffff5;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfffffffffffffff5;
+  *((unsigned long*)& __m256d_op0[1]) = 0xfffffffffffffff5;
+  *((unsigned long*)& __m256d_op0[0]) = 0xfffffffffffffff5;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256d_op0[1]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256d_op0[0]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000007;
+  *((unsigned long*)& __m256d_op0[2]) = 0x000000020000000b;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000007;
+  *((unsigned long*)& __m256d_op0[0]) = 0x000000020000000a;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x000000000000000a;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x000000000000000a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftint_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
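+  /* __lasx_xvftintrne_w_d: elements of two double-precision vectors
+     narrowed to one vector of signed 32-bit integers, rounding to nearest
+     even.  */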
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000505;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256d_op1[2]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256d_op1[1]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256d_op1[0]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000004;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000004040104;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffd1108199;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00000000714910f9;
+  *((unsigned long*)& __m256d_op1[3]) = 0x000000030000000c;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000001100000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000500000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000800000010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffe5ffffffe5;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffe5ffffffe5;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffe5ffffffe5;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffe5ffffffe5;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffe5ffffffe5;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffe5ffffffe5;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffe5ffffffe5;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffe5ffffffe5;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000001000000010;
+  *((unsigned long*)& __m256d_op1[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x000000017bfffff0;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000180007fe8;
+  *((unsigned long*)& __m256d_op0[1]) = 0x000000017bfffff0;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000180007fe8;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x7c00000880008000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x7c00000880008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x00000000000007c8;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x00000000000007c8;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000001fe01fe;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000ff0100;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000001fe01fe;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000ff0100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x000000010000685e;
+  *((unsigned long*)& __m256d_op1[2]) = 0x000020a4ffffbe4f;
+  *((unsigned long*)& __m256d_op1[1]) = 0x000000010000685e;
+  *((unsigned long*)& __m256d_op1[0]) = 0x000020a4ffffbe4f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000800000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m256d_op1[2]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m256d_op1[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m256d_op1[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrne_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
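+  /* __lasx_xvftintrz_w_d cases: double-to-word conversion truncating toward
+     zero (rounding mode inferred from the "rz" suffix).  */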
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfffffff0ffff0000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xfffffff0ffff0000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256d_op1[2]) = 0x3ff1808001020101;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256d_op1[0]) = 0x3ff1808001020101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvftintrz_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x34000000fff00000;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfff6e00000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x3380000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x363c0000fff3c000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x000000030000000c;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000001100000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000500000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000800000010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0xa5a5a5a5a5a5a5a5;
+  *((unsigned long*)& __m256d_op1[2]) = 0xa5a5a5a5a5a5a5ff;
+  *((unsigned long*)& __m256d_op1[1]) = 0xa5a5a5a5a5a5a5a5;
+  *((unsigned long*)& __m256d_op1[0]) = 0xa5a5a5a5a5a5a5ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0b085bfc00000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0b004bc000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0b085bfc00000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0b004bc000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256d_op0[2]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256d_op0[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256d_op0[0]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrz_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
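+  /* __lasx_xvftintrp_w_d cases: double-to-word conversion rounding toward
+     positive infinity (inferred from the "rp" suffix).  */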
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvftintrp_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0010001000100010;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0010001000107878;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0010001000107878;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvftintrp_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0040000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0040000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0040000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0040000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrp_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0005000500050005;
+  *((unsigned long*)& __m256d_op0[2]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0005000500050005;
+  *((unsigned long*)& __m256d_op0[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256d_op1[3]) = 0x00003fea00013fec;
+  *((unsigned long*)& __m256d_op1[2]) = 0x00003fe50001c013;
+  *((unsigned long*)& __m256d_op1[1]) = 0x00003fea00013fec;
+  *((unsigned long*)& __m256d_op1[0]) = 0x00003fe50001c013;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000180000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000180000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvftintrp_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
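+  /* __lasx_xvftintrm_w_d cases: double-to-word conversion rounding toward
+     negative infinity (inferred from the "rm" suffix).  */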
+  *((unsigned long*)& __m256d_op0[3]) = 0xffff000000010000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000095120000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xc9da000063f50000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xc7387fff6bbfffff;
+  *((unsigned long*)& __m256d_op1[3]) = 0xfffe000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x4001000100020000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000000;
+  __m256i_out = __lasx_xvftintrm_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256d_op0[1]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256d_op1[3]) = 0x00000000c0000005;
+  *((unsigned long*)& __m256d_op1[2]) = 0x21f8c3c4c0000005;
+  *((unsigned long*)& __m256d_op1[1]) = 0x00000000c0000005;
+  *((unsigned long*)& __m256d_op1[0]) = 0x21f8c3c4c0000005;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x00000000ffe36780;
+  *((unsigned long*)& __m256d_op1[2]) = 0x8000000100000001;
+  *((unsigned long*)& __m256d_op1[1]) = 0x00000000ffe36780;
+  *((unsigned long*)& __m256d_op1[0]) = 0x8000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvftintrm_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x0080000000800000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x0080000000800000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x0080000000800000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x0080000000800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_op0[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_op0[1]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256d_op1[3]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256d_op1[2]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256d_op1[1]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256d_op1[0]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256d_op0[2]) = 0xff00ff007f007f00;
+  *((unsigned long*)& __m256d_op0[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256d_op0[0]) = 0xff00ff007f007f00;
+  *((unsigned long*)& __m256d_op1[3]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256d_op1[2]) = 0xff00ff007f007f00;
+  *((unsigned long*)& __m256d_op1[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256d_op1[0]) = 0xff00ff007f007f00;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvftintrm_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000ffff0002fffc;
+  *((unsigned long*)& __m256d_op0[2]) = 0xffff0000fffd0003;
+  *((unsigned long*)& __m256d_op0[1]) = 0x0000ffff0002fffc;
+  *((unsigned long*)& __m256d_op0[0]) = 0xffff0000fffd0003;
+  *((unsigned long*)& __m256d_op1[3]) = 0x003f020001400200;
+  *((unsigned long*)& __m256d_op1[2]) = 0x003f00ff003f00c4;
+  *((unsigned long*)& __m256d_op1[1]) = 0x003f020001400200;
+  *((unsigned long*)& __m256d_op1[0]) = 0x003f00ff003f00c4;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrm_w_d(__m256d_op0,__m256d_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
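+  /* The operand is now a single-precision vector (__m256, written through
+     int casts): __lasx_xvftintl_l_s converts the low half of the float
+     elements to 64-bit integers (inferred from the "l_l_s" naming).  */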
+  *((int*)& __m256_op0[7]) = 0xc58a0a0a;
+  *((int*)& __m256_op0[6]) = 0x07070706;
+  *((int*)& __m256_op0[5]) = 0x006b60e4;
+  *((int*)& __m256_op0[4]) = 0x180b0023;
+  *((int*)& __m256_op0[3]) = 0x1b39153f;
+  *((int*)& __m256_op0[2]) = 0x334b966a;
+  *((int*)& __m256_op0[1]) = 0xf1d75d79;
+  *((int*)& __m256_op0[0]) = 0xefcac002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvftintl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x40404040;
+  *((int*)& __m256_op0[6]) = 0x40404040;
+  *((int*)& __m256_op0[5]) = 0x40404040;
+  *((int*)& __m256_op0[4]) = 0x40404040;
+  *((int*)& __m256_op0[3]) = 0x40404040;
+  *((int*)& __m256_op0[2]) = 0x40404040;
+  *((int*)& __m256_op0[1]) = 0x40404040;
+  *((int*)& __m256_op0[0]) = 0x40404040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000003;
+  __m256i_out = __lasx_xvftintl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00080000;
+  *((int*)& __m256_op0[4]) = 0x00000010;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00080000;
+  *((int*)& __m256_op0[0]) = 0x00000010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x40f69fe6;
+  *((int*)& __m256_op0[6]) = 0x3c26f4f5;
+  *((int*)& __m256_op0[5]) = 0x7ff7ffff;
+  *((int*)& __m256_op0[4]) = 0x00000007;
+  *((int*)& __m256_op0[3]) = 0x40f69fe6;
+  *((int*)& __m256_op0[2]) = 0x3c26f4f5;
+  *((int*)& __m256_op0[1]) = 0x7ff7ffff;
+  *((int*)& __m256_op0[0]) = 0x00000007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
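+  /* __lasx_xvftinth_l_s cases: the same float-to-long conversion applied to
+     the high half of the elements (inferred from the "h" suffix).  */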
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftinth_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00060000;
+  *((int*)& __m256_op0[6]) = 0x00040000;
+  *((int*)& __m256_op0[5]) = 0x00020000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00060000;
+  *((int*)& __m256_op0[2]) = 0x00040000;
+  *((int*)& __m256_op0[1]) = 0x00020000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftinth_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftinth_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftinth_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffff0000;
+  *((int*)& __m256_op0[4]) = 0xffff0000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffff0000;
+  *((int*)& __m256_op0[0]) = 0xffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftinth_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x55550000;
+  *((int*)& __m256_op0[6]) = 0x55550000;
+  *((int*)& __m256_op0[5]) = 0x55550000;
+  *((int*)& __m256_op0[4]) = 0x55550000;
+  *((int*)& __m256_op0[3]) = 0x55550000;
+  *((int*)& __m256_op0[2]) = 0x55550000;
+  *((int*)& __m256_op0[1]) = 0x55550000;
+  *((int*)& __m256_op0[0]) = 0x55550000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000d5000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000d5000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000d5000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000d5000000000;
+  __m256i_out = __lasx_xvftinth_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x007f8080;
+  *((int*)& __m256_op0[6]) = 0x007f007f;
+  *((int*)& __m256_op0[5]) = 0x007f8080;
+  *((int*)& __m256_op0[4]) = 0x007f007f;
+  *((int*)& __m256_op0[3]) = 0x007f8080;
+  *((int*)& __m256_op0[2]) = 0x007f007f;
+  *((int*)& __m256_op0[1]) = 0x007f8080;
+  *((int*)& __m256_op0[0]) = 0x007f007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftinth_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftinth_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x08e8c000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x0fffc000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x08e8c000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x0fffc000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftinth_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftinth_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
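+  /* __lasx_xvftintrnel_l_s cases: low-half float-to-long conversion with
+     round-to-nearest-even (inferred from the mnemonic).  */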
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrnel_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000002;
+  *((int*)& __m256_op0[4]) = 0x00000008;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000002;
+  *((int*)& __m256_op0[0]) = 0x00000008;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrnel_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrnel_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x7f1d7f7f;
+  *((int*)& __m256_op0[6]) = 0x7f1d7f3b;
+  *((int*)& __m256_op0[5]) = 0x02020102;
+  *((int*)& __m256_op0[4]) = 0x02020102;
+  *((int*)& __m256_op0[3]) = 0x7f1d7f7f;
+  *((int*)& __m256_op0[2]) = 0x7f1d7f3b;
+  *((int*)& __m256_op0[1]) = 0x02020102;
+  *((int*)& __m256_op0[0]) = 0x02020102;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrnel_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrnel_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrnel_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrnel_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
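+  /* __lasx_xvftintrneh_l_s cases: high-half float-to-long conversion with
+     round-to-nearest-even (inferred from the mnemonic).  */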
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00fffefe;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xfffffffc;
+  *((int*)& __m256_op0[4]) = 0x5556aaa8;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xfffffffc;
+  *((int*)& __m256_op0[0]) = 0x5556aaa8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0xffffcc80;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x7dfdff4b;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x002a5429;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x002a5429;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x77777777;
+  *((int*)& __m256_op0[6]) = 0xf7777777;
+  *((int*)& __m256_op0[5]) = 0xf7777777;
+  *((int*)& __m256_op0[4]) = 0x77777777;
+  *((int*)& __m256_op0[3]) = 0x77777777;
+  *((int*)& __m256_op0[2]) = 0xf7777777;
+  *((int*)& __m256_op0[1]) = 0xf7777777;
+  *((int*)& __m256_op0[0]) = 0x77777777;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000009;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000009;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000009;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x010c7fbc;
+  *((int*)& __m256_op0[6]) = 0x7e1c7e1c;
+  *((int*)& __m256_op0[5]) = 0xfe000000;
+  *((int*)& __m256_op0[4]) = 0x00000024;
+  *((int*)& __m256_op0[3]) = 0x010c7fbc;
+  *((int*)& __m256_op0[2]) = 0x7e1c7e1c;
+  *((int*)& __m256_op0[1]) = 0xfe000000;
+  *((int*)& __m256_op0[0]) = 0x00000024;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xfffffe20;
+  *((int*)& __m256_op0[6]) = 0x001dfe1f;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0xfffffe20;
+  *((int*)& __m256_op0[2]) = 0x001dfe1f;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffe1;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffe1;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffe1;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffe1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000040;
+  *((int*)& __m256_op0[6]) = 0x00000020;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000040;
+  *((int*)& __m256_op0[2]) = 0x00000020;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrneh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
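+  /* The cases below exercise __lasx_xvftintrzl_l_s: the low-half
+     single-precision elements of each 128-bit lane are converted to signed
+     64-bit integers, rounding toward zero (truncation).  */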
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x8b141414;
+  *((int*)& __m256_op0[4]) = 0x0e0e0e0e;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x36722a7e;
+  *((int*)& __m256_op0[0]) = 0x66972cd6;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvftintrzl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x6a9e3f9a;
+  *((int*)& __m256_op0[4]) = 0x603a2001;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x6a9e3f9a;
+  *((int*)& __m256_op0[0]) = 0x603a2001;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvftintrzl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000001;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x0000fafe;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x0000fafe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
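+  /* The cases below exercise __lasx_xvftintrzh_l_s: the high-half
+     single-precision elements of each 128-bit lane are converted to signed
+     64-bit integers, rounding toward zero (truncation).  */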
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00002262;
+  *((int*)& __m256_op0[6]) = 0x00005111;
+  *((int*)& __m256_op0[5]) = 0x0000165e;
+  *((int*)& __m256_op0[4]) = 0x0000480d;
+  *((int*)& __m256_op0[3]) = 0x00002262;
+  *((int*)& __m256_op0[2]) = 0x00005111;
+  *((int*)& __m256_op0[1]) = 0x0000165e;
+  *((int*)& __m256_op0[0]) = 0x0000480d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00040004;
+  *((int*)& __m256_op0[6]) = 0x00040004;
+  *((int*)& __m256_op0[5]) = 0x00040005;
+  *((int*)& __m256_op0[4]) = 0x00040005;
+  *((int*)& __m256_op0[3]) = 0x00040004;
+  *((int*)& __m256_op0[2]) = 0x00040004;
+  *((int*)& __m256_op0[1]) = 0x00040005;
+  *((int*)& __m256_op0[0]) = 0x00040005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrzh_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
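+  /* The cases below exercise __lasx_xvftintrpl_l_s: the low-half
+     single-precision elements of each 128-bit lane are converted to signed
+     64-bit integers, rounding toward positive infinity.  */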
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrpl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000102;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrpl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0x39ffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0x39ffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvftintrpl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrpl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x80000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x80000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x80000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x80000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrpl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x000055ff;
+  *((int*)& __m256_op0[6]) = 0x01f90ab5;
+  *((int*)& __m256_op0[5]) = 0xaa95eaff;
+  *((int*)& __m256_op0[4]) = 0xfec6e01f;
+  *((int*)& __m256_op0[3]) = 0x000055ff;
+  *((int*)& __m256_op0[2]) = 0x01f90ab5;
+  *((int*)& __m256_op0[1]) = 0xaa95eaff;
+  *((int*)& __m256_op0[0]) = 0xfec6e01f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvftintrpl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrpl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrpl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrpl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrpl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xfffeb683;
+  *((int*)& __m256_op0[6]) = 0x9ffffd80;
+  *((int*)& __m256_op0[5]) = 0xfffe97c0;
+  *((int*)& __m256_op0[4]) = 0x20010001;
+  *((int*)& __m256_op0[3]) = 0xfffeb683;
+  *((int*)& __m256_op0[2]) = 0x9ffffd80;
+  *((int*)& __m256_op0[1]) = 0xfffe97c0;
+  *((int*)& __m256_op0[0]) = 0x20010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvftintrpl_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
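+  /* The cases below exercise __lasx_xvftintrph_l_s: the high-half
+     single-precision elements of each 128-bit lane are converted to signed
+     64-bit integers, rounding toward positive infinity.  */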
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrph_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xfefefeff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xff295329;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xfefefeff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xff295329;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvftintrph_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xff00ffff;
+  *((int*)& __m256_op0[6]) = 0xff00ffff;
+  *((int*)& __m256_op0[5]) = 0xff00ffff;
+  *((int*)& __m256_op0[4]) = 0xff00ffff;
+  *((int*)& __m256_op0[3]) = 0xff00ffff;
+  *((int*)& __m256_op0[2]) = 0xff00ffff;
+  *((int*)& __m256_op0[1]) = 0xff00ffff;
+  *((int*)& __m256_op0[0]) = 0xff00ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvftintrph_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrph_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x7fefffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x7fefffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrph_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrph_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrph_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrph_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrph_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x02020102;
+  *((int*)& __m256_op0[6]) = 0x02020102;
+  *((int*)& __m256_op0[5]) = 0x02020102;
+  *((int*)& __m256_op0[4]) = 0x02020102;
+  *((int*)& __m256_op0[3]) = 0x02020102;
+  *((int*)& __m256_op0[2]) = 0x02020102;
+  *((int*)& __m256_op0[1]) = 0x02020102;
+  *((int*)& __m256_op0[0]) = 0x02020102;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvftintrph_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000001;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000001;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000001;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvftintrph_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
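+  /* The cases below exercise __lasx_xvftintrml_l_s: the low-half
+     single-precision elements of each 128-bit lane are converted to signed
+     64-bit integers, rounding toward negative infinity.  */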
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x000000ff;
+  *((int*)& __m256_op0[6]) = 0x000000f8;
+  *((int*)& __m256_op0[5]) = 0xbc8ff0ff;
+  *((int*)& __m256_op0[4]) = 0xffffcff8;
+  *((int*)& __m256_op0[3]) = 0x000000ff;
+  *((int*)& __m256_op0[2]) = 0x000000f8;
+  *((int*)& __m256_op0[1]) = 0xbc8ff0ff;
+  *((int*)& __m256_op0[0]) = 0xffffcff8;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000001;
+  *((int*)& __m256_op0[6]) = 0x00000001;
+  *((int*)& __m256_op0[5]) = 0x00000001;
+  *((int*)& __m256_op0[4]) = 0x00000001;
+  *((int*)& __m256_op0[3]) = 0x00000001;
+  *((int*)& __m256_op0[2]) = 0x00000001;
+  *((int*)& __m256_op0[1]) = 0x00000001;
+  *((int*)& __m256_op0[0]) = 0x00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+  *((int*)& __m256_op0[6]) = 0xffffffff;
+  *((int*)& __m256_op0[5]) = 0xffffffff;
+  *((int*)& __m256_op0[4]) = 0xffffffff;
+  *((int*)& __m256_op0[3]) = 0xffffffff;
+  *((int*)& __m256_op0[2]) = 0xffffffff;
+  *((int*)& __m256_op0[1]) = 0xffffffff;
+  *((int*)& __m256_op0[0]) = 0xffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x7fe37fe3;
+  *((int*)& __m256_op0[6]) = 0x001d001d;
+  *((int*)& __m256_op0[5]) = 0x7fff7fff;
+  *((int*)& __m256_op0[4]) = 0x7fff0000;
+  *((int*)& __m256_op0[3]) = 0x7fe37fe3;
+  *((int*)& __m256_op0[2]) = 0x001d001d;
+  *((int*)& __m256_op0[1]) = 0x7fff7fff;
+  *((int*)& __m256_op0[0]) = 0x7fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000010;
+  *((int*)& __m256_op0[6]) = 0x00000010;
+  *((int*)& __m256_op0[5]) = 0x00000010;
+  *((int*)& __m256_op0[4]) = 0x00000010;
+  *((int*)& __m256_op0[3]) = 0x00000010;
+  *((int*)& __m256_op0[2]) = 0x00000010;
+  *((int*)& __m256_op0[1]) = 0x00000010;
+  *((int*)& __m256_op0[0]) = 0x00000010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((int*)& __m256_op0[7]) = 0x00000000;
+  *((int*)& __m256_op0[6]) = 0x00000000;
+  *((int*)& __m256_op0[5]) = 0x00000000;
+  *((int*)& __m256_op0[4]) = 0x00000000;
+  *((int*)& __m256_op0[3]) = 0x00000000;
+  *((int*)& __m256_op0[2]) = 0x00000000;
+  *((int*)& __m256_op0[1]) = 0x00000000;
+  *((int*)& __m256_op0[0]) = 0x00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvftintrml_l_s(__m256_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
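+  /* The cases below exercise __lasx_xvffint_s_w and __lasx_xvffint_s_wu,
+     which convert signed and unsigned 32-bit integer elements, respectively,
+     to single-precision floating point.  */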
+  *((unsigned long*)& __m256i_op0[3]) = 0x372e9d75e8aab100;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc5c085372cfabfba;
+  *((unsigned long*)& __m256i_op0[1]) = 0x31730b5beb7c99f5;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0658f2dc0eb21e3c;
+  *((int*)& __m256_result[7]) = 0x4e5cba76;
+  *((int*)& __m256_result[6]) = 0xcdbaaa78;
+  *((int*)& __m256_result[5]) = 0xce68fdeb;
+  *((int*)& __m256_result[4]) = 0x4e33eaff;
+  *((int*)& __m256_result[3]) = 0x4e45cc2d;
+  *((int*)& __m256_result[2]) = 0xcda41b30;
+  *((int*)& __m256_result[1]) = 0x4ccb1e5c;
+  *((int*)& __m256_result[0]) = 0x4d6b21e4;
+  __m256_out = __lasx_xvffint_s_w(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffbf7f00007fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffe651ffffbfff;
+  *((int*)& __m256_result[7]) = 0x4f800000;
+  *((int*)& __m256_result[6]) = 0x4f800000;
+  *((int*)& __m256_result[5]) = 0x4f7fffbf;
+  *((int*)& __m256_result[4]) = 0x46fffe00;
+  *((int*)& __m256_result[3]) = 0x4f800000;
+  *((int*)& __m256_result[2]) = 0x4f800000;
+  *((int*)& __m256_result[1]) = 0x4f7fffe6;
+  *((int*)& __m256_result[0]) = 0x4f7fffc0;
+  __m256_out = __lasx_xvffint_s_wu(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvffint_s_w(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvffint_s_w(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((int*)& __m256_result[7]) = 0x4b808080;
+  *((int*)& __m256_result[6]) = 0x4b808080;
+  *((int*)& __m256_result[5]) = 0x4f800000;
+  *((int*)& __m256_result[4]) = 0x4f7fffff;
+  *((int*)& __m256_result[3]) = 0x4b808080;
+  *((int*)& __m256_result[2]) = 0x4b808080;
+  *((int*)& __m256_result[1]) = 0x4f800000;
+  *((int*)& __m256_result[0]) = 0x4f800000;
+  __m256_out = __lasx_xvffint_s_wu(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvffint_s_wu(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvffint_s_w(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvffint_s_w(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvffint_s_wu(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000008;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x41000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x41000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x41000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x41000000;
+  __m256_out = __lasx_xvffint_s_wu(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000020;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x42800000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x42000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x42800000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x42000000;
+  __m256_out = __lasx_xvffint_s_wu(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007fff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007fff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000008000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x4efffe00;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x47000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x4efffe00;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x47000000;
+  __m256_out = __lasx_xvffint_s_w(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ff00;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x477f0000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x477f0000;
+  __m256_out = __lasx_xvffint_s_w(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0010001000030000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010001000030000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0010001000030000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010001000030000;
+  *((int*)& __m256_result[7]) = 0x49800080;
+  *((int*)& __m256_result[6]) = 0x48400000;
+  *((int*)& __m256_result[5]) = 0x49800080;
+  *((int*)& __m256_result[4]) = 0x48400000;
+  *((int*)& __m256_result[3]) = 0x49800080;
+  *((int*)& __m256_result[2]) = 0x48400000;
+  *((int*)& __m256_result[1]) = 0x49800080;
+  *((int*)& __m256_result[0]) = 0x48400000;
+  __m256_out = __lasx_xvffint_s_w(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvffint_s_wu(__m256i_op0);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
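+  /* The one-operand blocks below exercise __lasx_xvffint_d_l and
+     __lasx_xvffint_d_lu, which convert each 64-bit element to double
+     precision, interpreting it as a signed or an unsigned integer
+     respectively (e.g. an all-ones element becomes -1.0 for the signed
+     form and rounds to 2^64 for the unsigned form).  */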
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xbff0000000000000;
+  __m256d_out = __lasx_xvffint_d_l(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x4370100000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x4370100000000000;
+  __m256d_out = __lasx_xvffint_d_lu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001700080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001700080;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x4177000800000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x4177000800000000;
+  __m256d_out = __lasx_xvffint_d_l(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffint_d_lu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffint_d_lu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffint_d_l(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffint_d_l(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffint_d_l(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256d_result[3]) = 0x43c0101010101010;
+  *((unsigned long*)& __m256d_result[2]) = 0x43c0101010101032;
+  *((unsigned long*)& __m256d_result[1]) = 0x43c0101010101010;
+  *((unsigned long*)& __m256d_result[0]) = 0x43c0101010101032;
+  __m256d_out = __lasx_xvffint_d_lu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x40efffe09fa88260;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6b07ca8e013fbf01;
+  *((unsigned long*)& __m256i_op0[1]) = 0x40efffe09fa7e358;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80ce32be3e827f00;
+  *((unsigned long*)& __m256d_result[3]) = 0x43d03bfff827ea21;
+  *((unsigned long*)& __m256d_result[2]) = 0x43dac1f2a3804ff0;
+  *((unsigned long*)& __m256d_result[1]) = 0x43d03bfff827e9f9;
+  *((unsigned long*)& __m256d_result[0]) = 0x43e019c657c7d050;
+  __m256d_out = __lasx_xvffint_d_lu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffint_d_l(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0x43f0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x43f0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x43f0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x43f0000000000000;
+  __m256d_out = __lasx_xvffint_d_lu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffint_d_l(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x41f0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x41f0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x41f0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x41f0000000000000;
+  __m256d_out = __lasx_xvffint_d_lu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffint_d_lu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xc1f0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xc1f0000000000000;
+  __m256d_out = __lasx_xvffint_d_l(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffint_d_l(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256d_result[3]) = 0x437fe01fe01fe020;
+  *((unsigned long*)& __m256d_result[2]) = 0x437fe01fe01fe020;
+  *((unsigned long*)& __m256d_result[1]) = 0x437fe01fe01fe020;
+  *((unsigned long*)& __m256d_result[0]) = 0x437fe01fe01fe020;
+  __m256d_out = __lasx_xvffint_d_l(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x04e8296f18181818;
+  *((unsigned long*)& __m256i_op0[2]) = 0x132feea900000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x04e8296f18181818;
+  *((unsigned long*)& __m256i_op0[0]) = 0x132feea900000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x4393a0a5bc606060;
+  *((unsigned long*)& __m256d_result[2]) = 0x43b32feea9000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x4393a0a5bc606060;
+  *((unsigned long*)& __m256d_result[0]) = 0x43b32feea9000000;
+  __m256d_out = __lasx_xvffint_d_l(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0202010202020102;
+  *((unsigned long*)& __m256d_result[3]) = 0x4380100810101008;
+  *((unsigned long*)& __m256d_result[2]) = 0x4380100810101008;
+  *((unsigned long*)& __m256d_result[1]) = 0x4380100810101008;
+  *((unsigned long*)& __m256d_result[0]) = 0x4380100810101008;
+  __m256d_out = __lasx_xvffint_d_lu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffint_d_lu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x41f0000000000000;
+  __m256d_out = __lasx_xvffint_d_lu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
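+  /* The two-operand blocks below exercise __lasx_xvffint_s_l, which
+     converts the 64-bit integer elements of both source vectors to
+     single-precision floats and packs the results into one vector.  */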
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvffint_s_l(__m256i_op0,__m256i_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x4f800000;
+  __m256_out = __lasx_xvffint_s_l(__m256i_op0,__m256i_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffc74180000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff884580000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0xbf800000;
+  *((int*)& __m256_result[6]) = 0xbf800000;
+  *((int*)& __m256_result[5]) = 0xd662fa00;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0xbf800000;
+  *((int*)& __m256_result[2]) = 0xbf800000;
+  *((int*)& __m256_result[1]) = 0xd6ef7500;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvffint_s_l(__m256i_op0,__m256i_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x00000000;
+  *((int*)& __m256_result[6]) = 0x00000000;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x00000000;
+  *((int*)& __m256_result[2]) = 0x00000000;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvffint_s_l(__m256i_op0,__m256i_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000005000000020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000005000000020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff00040000;
+  *((int*)& __m256_result[7]) = 0xdf000000;
+  *((int*)& __m256_result[6]) = 0x52a00000;
+  *((int*)& __m256_result[5]) = 0x5b7f00ff;
+  *((int*)& __m256_result[4]) = 0x5b7f00ff;
+  *((int*)& __m256_result[3]) = 0xdf000000;
+  *((int*)& __m256_result[2]) = 0x52a00000;
+  *((int*)& __m256_result[1]) = 0x5b7f00ff;
+  *((int*)& __m256_result[0]) = 0x5b7f00ff;
+  __m256_out = __lasx_xvffint_s_l(__m256i_op0,__m256i_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((int*)& __m256_result[7]) = 0x5d20a0a1;
+  *((int*)& __m256_result[6]) = 0x5d20a0a1;
+  *((int*)& __m256_result[5]) = 0x00000000;
+  *((int*)& __m256_result[4]) = 0x00000000;
+  *((int*)& __m256_result[3]) = 0x5d20a0a1;
+  *((int*)& __m256_result[2]) = 0x5d20a0a1;
+  *((int*)& __m256_result[1]) = 0x00000000;
+  *((int*)& __m256_result[0]) = 0x00000000;
+  __m256_out = __lasx_xvffint_s_l(__m256i_op0,__m256i_op1);
+  ASSERTEQ_32(__LINE__, __m256_result, __m256_out);
+
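+  /* The blocks below exercise __lasx_xvffintl_d_w, which converts the
+     low two 32-bit words of each 128-bit half to double precision.  */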
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5980000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x41d6600000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x41d6600000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffintl_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffintl_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256d_result[3]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256d_result[2]) = 0xc1d75053f0000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256d_result[0]) = 0xc1d75053f0000000;
+  __m256d_out = __lasx_xvffintl_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000001f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000001f;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x403f000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x403f000000000000;
+  __m256d_out = __lasx_xvffintl_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffintl_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00f7000000f70006;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00f7000000f70006;
+  *((unsigned long*)& __m256d_result[3]) = 0x416ee00000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x416ee000c0000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x416ee00000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x416ee000c0000000;
+  __m256d_out = __lasx_xvffintl_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff000000000080;
+  *((unsigned long*)& __m256d_result[3]) = 0x416fe00000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x4060000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x416fe00000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x4060000000000000;
+  __m256d_out = __lasx_xvffintl_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffc01fc01;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffc01fc01;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x41cfe01dde000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x41cfe01dde000000;
+  __m256d_out = __lasx_xvffintl_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
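+  /* The remaining blocks exercise __lasx_xvffinth_d_w, the companion
+     form that converts the high two 32-bit words of each 128-bit half
+     to double precision.  */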
+  *((unsigned long*)& __m256i_op0[3]) = 0x0e2d5626ff75cdbc;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5db4b156e2002a78;
+  *((unsigned long*)& __m256i_op0[1]) = 0xeeffbeb03ba3e6b0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0c16e25eb28d27ea;
+  *((unsigned long*)& __m256d_result[3]) = 0x41ac5aac4c000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xc161464880000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xc1b1004150000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x41cdd1f358000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000006f0000007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000006f0000007f;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe161616161616161;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe161616161616161;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256d_result[3]) = 0xc1be9e9e9f000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x41d8585858400000;
+  *((unsigned long*)& __m256d_result[1]) = 0xc1be9e9e9f000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x41d8585858400000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x41dfffc000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x41dfffdfffc00000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000007f3a40;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffb79fb74;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffb79fb74;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m256d_result[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xc192181230000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xc192181230000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256d_result[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xbff0000000000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ffffffff00;
+  *((unsigned long*)& __m256d_result[3]) = 0x40efffe000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x40efffe000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x41dffc0000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x41dffc0000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256d_result[3]) = 0xc039000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0xc039000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0xc039000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0xc039000000000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256d_result[0]) = 0x0000000000000000;
+  __m256d_out = __lasx_xvffinth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256d_result, __m256d_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-int-arith.c b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-int-arith.c
new file mode 100644
index 00000000000..e6b47e32d63
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-int-arith.c
@@ -0,0 +1,38361 @@
+/* { dg-do run } */
+/* { dg-options "-mlasx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lasxintrin.h>
+
+int main ()
+{
+  __m256i __m256i_op0, __m256i_op1, __m256i_op2, __m256i_out, __m256i_result;
+  __m256 __m256_op0, __m256_op1, __m256_op2, __m256_out, __m256_result;
+  __m256d __m256d_op0, __m256d_op1, __m256d_op2, __m256d_out, __m256d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i = 1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+
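+  /* Every generated test block below follows the same pattern: the
+     operand vectors are written element-by-element through pointer
+     casts, the LASX intrinsic under test is invoked, and ASSERTEQ_64 /
+     ASSERTEQ_32 (provided by simd_correctness_check.h) compare the
+     produced vector with the precomputed expected value, using
+     __LINE__ to identify a failing check.  */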
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000ffff8000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x06f880008000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x800080008000b8f1;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000010180000101;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfa08800080000101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x800080008000480f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001010000010100;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101000000010100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000000010100;
+  __m256i_out = __lasx_xvadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ffffffffff605a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ffffffffff605a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ffffffffff605a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ffffffffff605a;
+  __m256i_out = __lasx_xvadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x5555555536aaaaac;
+  *((unsigned long*)& __m256i_op0[2]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_op0[1]) = 0x5555555536aaaaac;
+  *((unsigned long*)& __m256i_op0[0]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x5555555536aaaaac;
+  *((unsigned long*)& __m256i_result[2]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_result[1]) = 0x5555555536aaaaac;
+  *((unsigned long*)& __m256i_result[0]) = 0x55555555aaaaaaac;
+  __m256i_out = __lasx_xvadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffefefffffefe;
+  __m256i_out = __lasx_xvadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadd_q(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffefffefffefffe;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000089;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000089;
+  __m256i_out = __lasx_xvadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3fff3fff3fff4000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000403f3fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x3fff3fff3fff4000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000403f3fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7ffe7ffe7ffe7ffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007ffe7ffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x7ffe7ffe7ffe8000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000807e7ffe;
+  __m256i_out = __lasx_xvadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000045;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000d0005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000045;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000d0005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000045;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000d0005;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000045;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000d0005;
+  __m256i_out = __lasx_xvadd_q(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op0[1]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op1[3]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op1[2]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op1[1]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6161616161616161;
+  *((unsigned long*)& __m256i_result[3]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_result[2]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_result[1]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_result[0]) = 0xc2c2c2c2c2c2c2c2;
+  __m256i_out = __lasx_xvadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000200;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000c0000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000040000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0020001f001f001e;
+  *((unsigned long*)& __m256i_result[2]) = 0x001f001fc01f001f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0020001f001f001e;
+  *((unsigned long*)& __m256i_result[0]) = 0x001f001f401f001f;
+  __m256i_out = __lasx_xvadd_q(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f8000007f7fffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f8000007f7fffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f8000007f7fffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f8000007f7fffff;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffff900000800;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff900000800;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_result[3]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_result[1]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f807f007f7f817f;
+  __m256i_out = __lasx_xvadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000014402080144;
+  __m256i_out = __lasx_xvadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7ffeffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7ffeffffffff;
+  __m256i_out = __lasx_xvadd_q(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000800000008;
+  __m256i_out = __lasx_xvadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000800200027;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000800200027;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000006040190d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000006040190d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000860601934;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000860601934;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000800200028;
+  __m256i_out = __lasx_xvadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00b213171dff0606;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00e9a80014ff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00b213171dff0606;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00e9a80014ff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00b213171dff0606;
+  *((unsigned long*)& __m256i_result[2]) = 0x00e9a80014ff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00b213171dff0606;
+  *((unsigned long*)& __m256i_result[0]) = 0x00e9a80014ff0000;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[0]) = 0x0202010202020102;
+  __m256i_out = __lasx_xvadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010001;
+  __m256i_out = __lasx_xvadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x800000ff800000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x800000ff800000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x800000ff800000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x800000ff800000ff;
+  __m256i_out = __lasx_xvadd_q(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000956a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000004efffe00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000956a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000004efffe00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x007fffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xb500000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x007fffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xb500000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x007fffffffff9569;
+  *((unsigned long*)& __m256i_result[2]) = 0xb50000004efffe00;
+  *((unsigned long*)& __m256i_result[1]) = 0x007fffffffff9569;
+  *((unsigned long*)& __m256i_result[0]) = 0xb50000004efffe00;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ff01;
+  __m256i_out = __lasx_xvadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x41cfe01dde000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x41cfe01dde000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x41cfe01dde000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x41cfe01dde000000;
+  __m256i_out = __lasx_xvadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000010000080040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000010000080040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000010000080040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010000080040;
+  __m256i_out = __lasx_xvadd_q(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000004000000040;
+  __m256i_out = __lasx_xvadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffeffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffeffff0000;
+  __m256i_out = __lasx_xvadd_q(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
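+  /* The cases below exercise element-wise subtraction via
+     __lasx_xvsub_{b,h,w,d} and 128-bit subtraction via __lasx_xvsub_q.  */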
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsub_q(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xc5c5c5c5c5c5c5c5;
+  *((unsigned long*)& __m256i_result[2]) = 0x45c5c5c645c5c5c6;
+  *((unsigned long*)& __m256i_result[1]) = 0xc5c5c5c5c5c5c5c5;
+  *((unsigned long*)& __m256i_result[0]) = 0x45c5c5c645c5c5c6;
+  __m256i_out = __lasx_xvsub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffff90ffffff80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff90ffffff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000006f0000007f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000006f0000007f;
+  __m256i_out = __lasx_xvsub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[3]) = 0xff01ff010000fff9;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff19;
+  *((unsigned long*)& __m256i_result[1]) = 0xff02ff020001fffa;
+  *((unsigned long*)& __m256i_result[0]) = 0x000100010001fffa;
+  __m256i_out = __lasx_xvsub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8080808080808081;
+  *((unsigned long*)& __m256i_result[1]) = 0x8080808080808081;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x017e00ff017e00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op1[1]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe1616161e1614e60;
+  *((unsigned long*)& __m256i_result[3]) = 0x1f9d9f9d1f9db29f;
+  *((unsigned long*)& __m256i_result[2]) = 0x1f9d9f9d201cb39e;
+  *((unsigned long*)& __m256i_result[1]) = 0x201c9f9d201cb29f;
+  *((unsigned long*)& __m256i_result[0]) = 0x1f9d9f9d201cb39e;
+  __m256i_out = __lasx_xvsub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xe0f02081c1c4ce2c;
+  *((unsigned long*)& __m256i_result[2]) = 0x8008000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xe0f02081c1c4ce2c;
+  *((unsigned long*)& __m256i_result[0]) = 0x8008000000000000;
+  __m256i_out = __lasx_xvsub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000009;
+  __m256i_out = __lasx_xvsub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000010100000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010100000000;
+  __m256i_out = __lasx_xvsub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff3eff3eff3eff3e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff3eff3eff3eff3e;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff3eff3eff3eff3e;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xff3eff3eff3eff3e;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffeffebfb7afb62;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffeffebfb7afb62;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc192181230000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc192181230000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x4010000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3e6ce7d9cb7afb62;
+  *((unsigned long*)& __m256i_result[1]) = 0x4010000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3e6ce7d9cb7afb62;
+  __m256i_out = __lasx_xvsub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffbe20fc;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000001cc7ee87;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000010bb83239;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000c409ed87;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0100020001bf1efd;
+  *((unsigned long*)& __m256i_result[2]) = 0x010002001ec8ec88;
+  *((unsigned long*)& __m256i_result[1]) = 0x010002010db9303a;
+  *((unsigned long*)& __m256i_result[0]) = 0x01000200c60aeb88;
+  __m256i_out = __lasx_xvsub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x43d03bfff827ea21;
+  *((unsigned long*)& __m256i_op1[2]) = 0x43dac1f2a3804ff0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x43d03bfff827e9f9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x43e019c657c7d050;
+  *((unsigned long*)& __m256i_result[3]) = 0xbc30c40107d915df;
+  *((unsigned long*)& __m256i_result[2]) = 0xbc263e0e5c80b010;
+  *((unsigned long*)& __m256i_result[1]) = 0xbc30c40107d91607;
+  *((unsigned long*)& __m256i_result[0]) = 0xbc20e63aa8392fb0;
+  __m256i_out = __lasx_xvsub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvsub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0505070804040404;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0504070804040404;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0505070804040404;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0504070804040404;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ff000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ff000000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0504080804030405;
+  *((unsigned long*)& __m256i_result[2]) = 0x0504060904040305;
+  *((unsigned long*)& __m256i_result[1]) = 0x0504080804030405;
+  *((unsigned long*)& __m256i_result[0]) = 0x0504060904040305;
+  __m256i_out = __lasx_xvsub_q(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010200000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010200000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010200000000;
+  __m256i_out = __lasx_xvsub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[3]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256i_result[2]) = 0xff21c241ff21c238;
+  *((unsigned long*)& __m256i_result[1]) = 0xff21c241ff21c241;
+  *((unsigned long*)& __m256i_result[0]) = 0xff21c241ff21c238;
+  __m256i_out = __lasx_xvsub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007e1c7e1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7e00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007e1c7e1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7e00000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007e1c7e1c;
+  *((unsigned long*)& __m256i_result[2]) = 0x7e00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007e1c7e1c;
+  *((unsigned long*)& __m256i_result[0]) = 0x7e00000000000000;
+  __m256i_out = __lasx_xvsub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000001c9880;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000001c9880;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffe36780;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffe36780;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000100000001;
+  __m256i_out = __lasx_xvsub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf7f8f7f8f7f8f7f8;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xf7f8f7f8f7f8f7f8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000040004000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000004000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000040004000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000004000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01ffff4300ffff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x01ffff4300ffff00;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00ff003f003f00;
+  *((unsigned long*)& __m256i_result[2]) = 0xff0101fd00010100;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00ff003f003f00;
+  *((unsigned long*)& __m256i_result[0]) = 0xff0101fd00010100;
+  __m256i_out = __lasx_xvsub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00b213171dff0606;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00e9a80014ff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00b213171dff0606;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00e9a80014ff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00b213181dff0607;
+  *((unsigned long*)& __m256i_result[2]) = 0x00e9a80114ff0001;
+  *((unsigned long*)& __m256i_result[1]) = 0x00b213181dff0607;
+  *((unsigned long*)& __m256i_result[0]) = 0x00e9a80114ff0001;
+  __m256i_out = __lasx_xvsub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000e000e000e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000e0000000d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000e000e000e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000e0000000d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000e000e000e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000e0000000d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000e000e000e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000e0000000d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000000;
+  __m256i_out = __lasx_xvsub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffed;
+  __m256i_out = __lasx_xvsub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fff8fff8;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ff00fff8ffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fff8fff8;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ff00fff8ffc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000fff8ff40;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ff0100090040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000fff8ff40;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ff0100090040;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffefff80;
+  __m256i_out = __lasx_xvsub_q(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000fdfdfe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001fffe00010000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7ffe0001fffe0001;
+  *((unsigned long*)& __m256i_result[2]) = 0x7ffe0001fffeffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000fdfdfe;
+  __m256i_out = __lasx_xvsub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x90007fff90008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0ffffffe90008000;
+  __m256i_out = __lasx_xvsub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x207f207f207f2000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000207f2000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[3]) = 0xdf80df80df80dfff;
+  *((unsigned long*)& __m256i_result[2]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffdf80dfff;
+  *((unsigned long*)& __m256i_result[0]) = 0x8080808080808080;
+  __m256i_out = __lasx_xvsub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff80000000;
+  __m256i_out = __lasx_xvsub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x05ea05ea05ea05ec;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x05ea05ea05ea05ec;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfa15fa15fa15fa14;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfa15fa15fa15fa14;
+  __m256i_out = __lasx_xvsub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
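+  /* The cases below exercise addition of an unsigned immediate via
+     __lasx_xvaddi_{bu,hu,wu,du}.  */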
+  *((unsigned long*)& __m256i_op0[3]) = 0x000019410000e69a;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf259905a0c126604;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000883a00000f20;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6d3c2d3aa1c82947;
+  *((unsigned long*)& __m256i_result[3]) = 0x000019410000e6aa;
+  *((unsigned long*)& __m256i_result[2]) = 0xf259905a0c126614;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000883a00000f30;
+  *((unsigned long*)& __m256i_result[0]) = 0x6d3c2d3aa1c82957;
+  __m256i_out = __lasx_xvaddi_du(__m256i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x44bb2cd3a35c2fd0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xca355ba46a95e31c;
+  *((unsigned long*)& __m256i_result[3]) = 0x1d1d1d1d1d1d1d1d;
+  *((unsigned long*)& __m256i_result[2]) = 0x1d1d1d1d1d1d1d1d;
+  *((unsigned long*)& __m256i_result[1]) = 0x61d849f0c0794ced;
+  *((unsigned long*)& __m256i_result[0]) = 0xe75278c187b20039;
+  __m256i_out = __lasx_xvaddi_bu(__m256i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffbf7f7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffe651bfff;
+  *((unsigned long*)& __m256i_result[3]) = 0x1d1d1d1d1d1d1d1d;
+  *((unsigned long*)& __m256i_result[2]) = 0x1d1d1d1ddd9d9d1d;
+  *((unsigned long*)& __m256i_result[1]) = 0x1d1d1d1d1d1d1d1d;
+  *((unsigned long*)& __m256i_result[0]) = 0x1d1d1d1d046fdd1d;
+  __m256i_out = __lasx_xvaddi_bu(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1515151515151515;
+  *((unsigned long*)& __m256i_result[2]) = 0x1515151515151515;
+  *((unsigned long*)& __m256i_result[1]) = 0x1515151515151515;
+  *((unsigned long*)& __m256i_result[0]) = 0x1515151515151515;
+  __m256i_out = __lasx_xvaddi_bu(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1818181818181818;
+  *((unsigned long*)& __m256i_result[2]) = 0x1818181818181818;
+  *((unsigned long*)& __m256i_result[1]) = 0x1818181818181818;
+  *((unsigned long*)& __m256i_result[0]) = 0x1818181818181818;
+  __m256i_out = __lasx_xvaddi_bu(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_result[2]) = 0x5982000200020002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_result[0]) = 0x5982000200020002;
+  __m256i_out = __lasx_xvaddi_hu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00007fff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00007fff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_result[2]) = 0x0202810102020202;
+  *((unsigned long*)& __m256i_result[1]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_result[0]) = 0x0202810102020202;
+  __m256i_out = __lasx_xvaddi_bu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[3]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_result[2]) = 0x001f001f02c442af;
+  *((unsigned long*)& __m256i_result[1]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_result[0]) = 0x001f001f02c442af;
+  __m256i_out = __lasx_xvaddi_hu(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010001000100010;
+  __m256i_out = __lasx_xvaddi_hu(__m256i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001900000019;
+  __m256i_out = __lasx_xvaddi_wu(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x807e80fd80fe80fd;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80938013800d8002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x807e80fd80fe0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80938013800d0005;
+  *((unsigned long*)& __m256i_result[3]) = 0x8091811081118110;
+  *((unsigned long*)& __m256i_result[2]) = 0x80a6802680208015;
+  *((unsigned long*)& __m256i_result[1]) = 0x8091811081110013;
+  *((unsigned long*)& __m256i_result[0]) = 0x80a6802680200018;
+  __m256i_out = __lasx_xvaddi_hu(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000003f00390035;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8015003f0006001f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000003f00390035;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8015003f0006001f;
+  *((unsigned long*)& __m256i_result[3]) = 0x000b004a00440040;
+  *((unsigned long*)& __m256i_result[2]) = 0x8020004a0011002a;
+  *((unsigned long*)& __m256i_result[1]) = 0x000b004a00440040;
+  *((unsigned long*)& __m256i_result[0]) = 0x8020004a0011002a;
+  __m256i_out = __lasx_xvaddi_hu(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000000d;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000000d;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000000d;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000000d;
+  __m256i_out = __lasx_xvaddi_du(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0909090909090909;
+  *((unsigned long*)& __m256i_result[2]) = 0x0909090909090909;
+  *((unsigned long*)& __m256i_result[1]) = 0x0909090909090909;
+  *((unsigned long*)& __m256i_result[0]) = 0x0909090909090909;
+  __m256i_out = __lasx_xvaddi_bu(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0016001600160016;
+  *((unsigned long*)& __m256i_result[2]) = 0x0016001600160016;
+  *((unsigned long*)& __m256i_result[1]) = 0x0016001600160016;
+  *((unsigned long*)& __m256i_result[0]) = 0x0016001600160016;
+  __m256i_out = __lasx_xvaddi_hu(__m256i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000600000006;
+  __m256i_out = __lasx_xvaddi_wu(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001a0000001a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001a0000001a;
+  __m256i_out = __lasx_xvaddi_wu(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ffce20;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ffce20;
+  *((unsigned long*)& __m256i_result[3]) = 0x1514151415141514;
+  *((unsigned long*)& __m256i_result[2]) = 0x151415141514e335;
+  *((unsigned long*)& __m256i_result[1]) = 0x1514151415141514;
+  *((unsigned long*)& __m256i_result[0]) = 0x151415141514e335;
+  __m256i_out = __lasx_xvaddi_bu(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0606060606060606;
+  *((unsigned long*)& __m256i_result[2]) = 0x0606060606060606;
+  *((unsigned long*)& __m256i_result[1]) = 0x0606060606060606;
+  *((unsigned long*)& __m256i_result[0]) = 0x0606060606060606;
+  __m256i_out = __lasx_xvaddi_bu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0fff0ff01ff01;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0fff0fff0fff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0fff0ff01ff01;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0fff0fff0fff0;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff0fff0ff01ff14;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff0fff0fff10003;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff0fff0ff01ff14;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff0fff0fff10003;
+  __m256i_out = __lasx_xvaddi_du(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff47b4ffff5879;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff47b4ffff5879;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff47b4ffff5879;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff47b4ffff5879;
+  __m256i_out = __lasx_xvaddi_du(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001900000019;
+  __m256i_out = __lasx_xvaddi_wu(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddi_wu(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1212121212121212;
+  *((unsigned long*)& __m256i_result[2]) = 0x1212121212121212;
+  *((unsigned long*)& __m256i_result[1]) = 0x1212121212121212;
+  *((unsigned long*)& __m256i_result[0]) = 0x1212121212121212;
+  __m256i_out = __lasx_xvaddi_bu(__m256i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xa1a1a1a1a1a1a1a1;
+  *((unsigned long*)& __m256i_op0[2]) = 0xa1a1a1a15e5e5e5e;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa1a1a1a1a1a1a1a1;
+  *((unsigned long*)& __m256i_op0[0]) = 0xa1a1a1a15e5e5e5e;
+  *((unsigned long*)& __m256i_result[3]) = 0xa1bfa1bfa1bfa1bf;
+  *((unsigned long*)& __m256i_result[2]) = 0xa1bfa1bf5e7c5e7c;
+  *((unsigned long*)& __m256i_result[1]) = 0xa1bfa1bfa1bfa1bf;
+  *((unsigned long*)& __m256i_result[0]) = 0xa1bfa1bf5e7c5e7c;
+  __m256i_out = __lasx_xvaddi_hu(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0808080808080808;
+  __m256i_out = __lasx_xvaddi_bu(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001d0000001d;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001d0000001d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001d0000001d;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001d0000001d;
+  __m256i_out = __lasx_xvaddi_wu(__m256i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvaddi_du(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000600000006;
+  __m256i_out = __lasx_xvaddi_wu(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000100080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000100080;
+  *((unsigned long*)& __m256i_result[3]) = 0x001a001a001a009a;
+  *((unsigned long*)& __m256i_result[2]) = 0x001a001a002a009a;
+  *((unsigned long*)& __m256i_result[1]) = 0x001a001a001a009a;
+  *((unsigned long*)& __m256i_result[0]) = 0x001a001a002a009a;
+  __m256i_out = __lasx_xvaddi_hu(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x001c001c001c001c;
+  *((unsigned long*)& __m256i_result[2]) = 0x001c001c001c001c;
+  *((unsigned long*)& __m256i_result[1]) = 0x001c001c001c001c;
+  *((unsigned long*)& __m256i_result[0]) = 0x001c001c001d001d;
+  __m256i_out = __lasx_xvaddi_hu(__m256i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_result[2]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_result[1]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_result[0]) = 0x0fffffff10000006;
+  __m256i_out = __lasx_xvaddi_du(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7200000072000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7200000072000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7200000072000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7200000072000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x721e001e721e001e;
+  *((unsigned long*)& __m256i_result[2]) = 0x721e001e721e001e;
+  *((unsigned long*)& __m256i_result[1]) = 0x721e001e721e001e;
+  *((unsigned long*)& __m256i_result[0]) = 0x721e001e721e001e;
+  __m256i_out = __lasx_xvaddi_hu(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000001200000012;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000001200000012;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000001200000012;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001200000012;
+  *((unsigned long*)& __m256i_result[3]) = 0x1a1a1a2c1a1a1a2c;
+  *((unsigned long*)& __m256i_result[2]) = 0x1a1a1a2c1a1a1a2c;
+  *((unsigned long*)& __m256i_result[1]) = 0x1a1a1a2c1a1a1a2c;
+  *((unsigned long*)& __m256i_result[0]) = 0x1a1a1a2c1a1a1a2c;
+  __m256i_out = __lasx_xvaddi_bu(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000001fffd;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000001fffd;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000700020004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000700020004;
+  __m256i_out = __lasx_xvaddi_wu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000800000008;
+  __m256i_out = __lasx_xvaddi_wu(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x1d1d1d1e1d1d1d1e;
+  *((unsigned long*)& __m256i_result[2]) = 0x1d1d1d1e1d1d1d1e;
+  *((unsigned long*)& __m256i_result[1]) = 0x1d1d1d1e1d1d1d1e;
+  *((unsigned long*)& __m256i_result[0]) = 0x1d1d1d1e1d1d1d1e;
+  __m256i_out = __lasx_xvaddi_bu(__m256i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
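+  /* The xvsubi_{bu,hu,wu,du} cases below subtract the unsigned 5-bit
+     immediate from every element; underflow wraps, e.g. 0x00 - 0x17 == 0xe9
+     in the byte case that follows.  */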
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffefe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffefb;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffefb;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000fe;
+  __m256i_out = __lasx_xvsubi_du(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m256i_result[2]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m256i_result[1]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m256i_result[0]) = 0xe9e9e9e9e9e9e9e9;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffffc0008001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffffc0008001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffffc0008001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffffc0008001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffffc0007fe9;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffffc0007fe9;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffffc0007fe9;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffffc0007fe9;
+  __m256i_out = __lasx_xvsubi_du(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000004fb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffef000004ea;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffefffffffef;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffefffffffef;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[3]) = 0xf9f8f9f8f9f9f900;
+  *((unsigned long*)& __m256i_result[2]) = 0xf9f9f9f9f9f9f9e0;
+  *((unsigned long*)& __m256i_result[1]) = 0xf9f8f9f8f9f9f900;
+  *((unsigned long*)& __m256i_result[0]) = 0xf9f9f9f9f9f9f900;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xefefefefefefefef;
+  *((unsigned long*)& __m256i_result[2]) = 0xefefefefefefefef;
+  *((unsigned long*)& __m256i_result[1]) = 0xefefefefefefef6e;
+  *((unsigned long*)& __m256i_result[0]) = 0xeeeeeeeeeeeeeeee;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_result[2]) = 0x6aeaeaeaeaeaeaea;
+  *((unsigned long*)& __m256i_result[1]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_result[0]) = 0x6aeaeaeaeaeaeaea;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf6f6f6f6f6f6f6f6;
+  *((unsigned long*)& __m256i_result[2]) = 0xf6f6f6f6f6f6f6f6;
+  *((unsigned long*)& __m256i_result[1]) = 0xf6f6f6f6f6f6f6f6;
+  *((unsigned long*)& __m256i_result[0]) = 0xf6f6f6f6f6f6f6f6;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffff6;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffff6;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffff6;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffff6;
+  __m256i_out = __lasx_xvsubi_du(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff7fff7fff7fff7;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff7fff7fff7fff7;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff7fff7fff7fff7;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff7fff7fff7fff7;
+  __m256i_out = __lasx_xvsubi_hu(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000002a54290;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffecffffffec;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffee;
+  __m256i_out = __lasx_xvsubi_du(__m256i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_result[2]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_result[1]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_result[0]) = 0xe7e7e7e7e7e7e7e7;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000018;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000018;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffff30000000b;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffff3fffffff3;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffff30000000b;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff3fffffff3;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffff5fffffff5;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffff5fffffff5;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffff5fffffff5;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff5fffffff5;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_op0[1]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_result[3]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_result[2]) = 0xdbcbdbcbdbcbdbcb;
+  *((unsigned long*)& __m256i_result[1]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_result[0]) = 0xdbcbdbcbdbcbdbcb;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffe5ffffffe5;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffe5ffffffe5;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffe5ffffffe5;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffe5ffffffe5;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffeaffffffea;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffeaffffffea;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffeaffffffea;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffeaffffffea;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x5d20a0a15d20a0a1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x5d20a0a15d20a0a1;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x5d20a0895d20a089;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffe8ffffffe8;
+  *((unsigned long*)& __m256i_result[1]) = 0x5d20a0895d20a089;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffe8ffffffe8;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0e0d0c0b0e0d0c0b;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0e0d0c0b0e0d0c0b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0e0d0c0b0e0d0c0b;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0e0d0c0b0e0d0c0b;
+  *((unsigned long*)& __m256i_result[3]) = 0x0a0908070a090807;
+  *((unsigned long*)& __m256i_result[2]) = 0x0a0908070a090807;
+  *((unsigned long*)& __m256i_result[1]) = 0x0a0908070a090807;
+  *((unsigned long*)& __m256i_result[0]) = 0x0a0908070a090807;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffe8ffffffe8;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffe8ffffffe8;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffe8ffffffe8;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffe8ffffffe8;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000ff;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x6);
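+  /* int_out from the xvpickve2gr_w call above is stored but not compared
+     against a reference value here; presumably the call only exercises the
+     builtin.  */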
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000022be22be;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fffa2bea2be;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000022be22be;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fffa2bea2be;
+  *((unsigned long*)& __m256i_result[3]) = 0xffe1ffe1229f229f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fe07fe0a29fa29f;
+  *((unsigned long*)& __m256i_result[1]) = 0xffe1ffe1229f229f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fe07fe0a29fa29f;
+  __m256i_out = __lasx_xvsubi_hu(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffe5ffe5ffe5ffe5;
+  *((unsigned long*)& __m256i_result[2]) = 0xffe5ffe5ffe5ffe5;
+  *((unsigned long*)& __m256i_result[1]) = 0xffe5ffe5ffe5ffe5;
+  *((unsigned long*)& __m256i_result[0]) = 0xffe5ffe5ffe5ffe5;
+  __m256i_out = __lasx_xvsubi_hu(__m256i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffe6;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffe6;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffe6;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffe6;
+  __m256i_out = __lasx_xvsubi_du(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffe1;
+  __m256i_out = __lasx_xvsubi_du(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff1fff1fff1fff1;
+  __m256i_out = __lasx_xvsubi_hu(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_result[2]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_result[1]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_result[0]) = 0xf9f9f9f9f9f9f9f9;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xf3f3f3f3f3f3f3f3;
+  *((unsigned long*)& __m256i_result[2]) = 0xf2f2f2f2f2f2f2f2;
+  *((unsigned long*)& __m256i_result[1]) = 0xf3f3f3f3f3f3f3f3;
+  *((unsigned long*)& __m256i_result[0]) = 0xf2f2f2f2f2f2f2f2;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffcfffcfffcfffc;
+  __m256i_out = __lasx_xvsubi_hu(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_result[2]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_result[1]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_result[0]) = 0xebebebebebebebeb;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffcfffffffc;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffcfffffffc;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffcfffffffc;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffcfffffffc;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000100080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000100080;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000006d;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000010006d;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000006d;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000010006d;
+  __m256i_out = __lasx_xvsubi_du(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffef;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffef;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffee;
+  __m256i_out = __lasx_xvsubi_du(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffeb683007ffd80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c0df5b41cf;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffeb683007ffd80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c0df5b41cf;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffeb664007ffd61;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffe97a1df5b41b0;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffeb664007ffd61;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffe97a1df5b41b0;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffff4;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffff4;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffff4;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffff4;
+  __m256i_out = __lasx_xvsubi_du(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfefefefefdfdfdfd;
+  *((unsigned long*)& __m256i_result[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfefefefefdfdfdfd;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xe4e4e4e4e4e4e4e4;
+  *((unsigned long*)& __m256i_result[2]) = 0xe4e4e4e4e4e4e4e4;
+  *((unsigned long*)& __m256i_result[1]) = 0xe4e4e4e4e4e4e4e4;
+  *((unsigned long*)& __m256i_result[0]) = 0xe4e4e4e4e4e4e4e4;
+  __m256i_out = __lasx_xvsubi_bu(__m256i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x3);
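+  /* Likewise, unsigned_long_int_out from the xvpickve2gr_du call above is
+     stored but not asserted.  */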
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffed;
+  __m256i_out = __lasx_xvsubi_du(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffe7ffffffe7;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000400000003ffb;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000400100004001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000400000003ffb;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000400100004001;
+  *((unsigned long*)& __m256i_result[3]) = 0x00003fef00003fea;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003ff000003ff0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00003fef00003fea;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003ff000003ff0;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffe4ffffffe4;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffe4ffffffe4;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffe4ffffffe4;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffe4ffffffe4;
+  __m256i_out = __lasx_xvsubi_wu(__m256i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
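+  /* The xvneg_{b,h,w,d} cases below negate every element in two's
+     complement; the minimum value maps to itself, e.g. a 0x80 byte stays
+     0x80 after xvneg_b.  */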
+  *((unsigned long*)& __m256i_op0[3]) = 0x002e4db200000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000315ac0000d658;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00735278007cf94c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0003ed8800031b38;
+  *((unsigned long*)& __m256i_result[3]) = 0xffd1b24e00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffcea54ffff29a8;
+  *((unsigned long*)& __m256i_result[1]) = 0xff8cad88ff8306b4;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffc1278fffce4c8;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000ffff8000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x06f880008000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x800080008000b8f1;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000010180000101;
+  *((unsigned long*)& __m256i_result[2]) = 0xfa08800080000101;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x800080008000480f;
+  __m256i_out = __lasx_xvneg_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvneg_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffefffffefc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010102;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010201010204;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010102;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010102;
+  __m256i_out = __lasx_xvneg_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010203;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvneg_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fff0e400;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007380;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000f1c00;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000800000000000;
+  __m256i_out = __lasx_xvneg_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000000;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0081000100810001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0081000100810001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0081000100810001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0081000100810001;
+  __m256i_out = __lasx_xvneg_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvneg_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x223d76f09f3881ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3870ca8d013e76a0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x223d76f09f37e357;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ec0a1b2aba7ed0;
+  *((unsigned long*)& __m256i_result[3]) = 0xdec38a1061c87f01;
+  *((unsigned long*)& __m256i_result[2]) = 0xc8903673ffc28a60;
+  *((unsigned long*)& __m256i_result[1]) = 0xdec38a1061c91da9;
+  *((unsigned long*)& __m256i_result[0]) = 0xbd14f6e5d6468230;
+  __m256i_out = __lasx_xvneg_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000007e8080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001fdda7dc4;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000007e8080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000001fdda7dc4;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ff827f80;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0226823c;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ff827f80;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0226823c;
+  __m256i_out = __lasx_xvneg_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000180000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000180000001;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000001;
+  __m256i_out = __lasx_xvneg_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000f000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000f000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff1000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff1000000000000;
+  __m256i_out = __lasx_xvneg_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff8000ffa3;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000008000165a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff8000ffa3;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000008000165a;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff00017fff005d;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fffe9a6;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff00017fff005d;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffe9a6;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf7f7f7f7f7f7f7f8;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xf7f7f7f7f7f7f7f8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffff0100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff0100000001;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0100004300000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0100004300000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xff0000bd00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[0]) = 0xff0000bd00000000;
+  __m256i_out = __lasx_xvneg_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000010000080040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000010000080040;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000fff8ffc0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ff00fff8ffc0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000fff8ffc0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ff00fff8ffc0;
+  __m256i_out = __lasx_xvneg_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010001;
+  __m256i_out = __lasx_xvneg_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000497fe0000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000683fe0000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000497fe0000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000683fe0000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffb6811fffff80;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff97c120000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffb6811fffff80;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff97c120000000;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfefefefefdfdfdfd;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfefefefefdfdfdfd;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010202020203;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010201010102;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010202020203;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010201010102;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000032;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000032;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffce;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffce;
+  __m256i_out = __lasx_xvneg_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvneg_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00007fde00007fd4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00007fe000007fe0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00007fde00007fd4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00007fe000007fe0;
+  *((unsigned long*)& __m256i_result[3]) = 0x000081220000812c;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000812000008120;
+  *((unsigned long*)& __m256i_result[1]) = 0x000081220000812c;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000812000008120;
+  __m256i_out = __lasx_xvneg_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000002780;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000002780;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffd880;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffd880;
+  __m256i_out = __lasx_xvneg_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
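+  /* The tests below exercise the saturating-add intrinsics
+     (__lasx_xvsadd_{b,h,w,d} and their unsigned _bu/_hu/_wu/_du forms).  */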
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffc81aca;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003a0a9512;
+  *((unsigned long*)& __m256i_op0[1]) = 0x280ac9da313863f4;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe032c739adcc6bbd;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffe000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100020001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000fffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffdffffffc81aca;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff3a0b9512;
+  *((unsigned long*)& __m256i_result[1]) = 0x280bc9db313a63f5;
+  *((unsigned long*)& __m256i_result[0]) = 0xe032c738adcb6bbb;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffff8046867f79;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffff328dfff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6651bfff80000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffff8046867f79;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffff328dfff;
+  *((unsigned long*)& __m256i_result[0]) = 0x6651bfff80000000;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000007f00000022;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000007f00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000007f00000022;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000007f00000000;
+  __m256i_out = __lasx_xvsadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_result[2]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_result[1]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_result[0]) = 0x45c5c5c545c5c5c5;
+  __m256i_out = __lasx_xvsadd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000001700080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000001700080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001700080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001700080;
+  __m256i_out = __lasx_xvsadd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_result[2]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_result[1]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_result[0]) = 0x1c1b1a191c1b1a19;
+  __m256i_out = __lasx_xvsadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000007;
+  __m256i_out = __lasx_xvsadd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1fe01e0000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1fe01e0000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1fe01e0100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x1fe01e0100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf000f00000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf000f00000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x6300000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xf000f00000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x6300000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xf000f00000000001;
+  __m256i_out = __lasx_xvsadd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsadd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0002ff80ffb70000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffb7ff80ffd0ffd8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00010000002fff9e;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffb5ff80ffd0ffd8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002ff80ffb70000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffb7ff80ffd0ffd8;
+  *((unsigned long*)& __m256i_result[1]) = 0x00010000002fff9e;
+  *((unsigned long*)& __m256i_result[0]) = 0xffb5ff80ffd0ffd8;
+  __m256i_out = __lasx_xvsadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffefffefffefffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffefffefffefffd;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000002a542a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000002a542a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000004290;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000004290;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000004290;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000004290;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000004290;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000002a96ba;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000004290;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000002a96ba;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000004000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000004000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000004000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000004000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf5f5f5f5f5f5f5f5;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf5f5f5f5f5f5f5f5;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xf9f5f9f5f9f5f9f5;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xf9f5f9f5f9f5f9f5;
+  __m256i_out = __lasx_xvsadd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[3]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsadd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[0]) = 0xf7f7f7f7f7f7f7f7;
+  __m256i_out = __lasx_xvsadd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x03f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[2]) = 0x03f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x03f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[0]) = 0x03f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x03f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[2]) = 0x03f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[1]) = 0x03f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[0]) = 0x03f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[3]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_result[2]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_result[1]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_result[0]) = 0x07efefefefefefee;
+  __m256i_out = __lasx_xvsadd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fffffffa;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffffffa;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fffffffa;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fffffffa;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000000;
+  __m256i_out = __lasx_xvsadd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xff3eff3eff3eff3e;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xff3eff3eff3eff3e;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xa020202020202020;
+  *((unsigned long*)& __m256i_op1[2]) = 0xa020202020206431;
+  *((unsigned long*)& __m256i_op1[1]) = 0xa020202020202020;
+  *((unsigned long*)& __m256i_op1[0]) = 0xa020202020206431;
+  *((unsigned long*)& __m256i_result[3]) = 0xa020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0xa020202020206431;
+  *((unsigned long*)& __m256i_result[1]) = 0xa020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0xa020202020206431;
+  __m256i_out = __lasx_xvsadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000002;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000002;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4ffc3f79d20bf257;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffec6f90604bf;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4ffc3f79d20bf257;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffec6f90604bf;
+  *((unsigned long*)& __m256i_result[3]) = 0x4ffc3f79d20bf257;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffec6f90604bf;
+  *((unsigned long*)& __m256i_result[1]) = 0x4ffc3f79d20bf257;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffec6f90604bf;
+  __m256i_out = __lasx_xvsadd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x43d03bfff827ea21;
+  *((unsigned long*)& __m256i_op0[2]) = 0x43dac1f2a3804ff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x43d03bfff827e9f9;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43e019c657c7d050;
+  *((unsigned long*)& __m256i_op1[3]) = 0x43d03bfff827ea21;
+  *((unsigned long*)& __m256i_op1[2]) = 0x43dac1f2a3804ff0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x43d03bfff827e9f9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x43e019c657c7d050;
+  *((unsigned long*)& __m256i_result[3]) = 0x86ff76ffff4eff42;
+  *((unsigned long*)& __m256i_result[2]) = 0x86ffffffffff9eff;
+  *((unsigned long*)& __m256i_result[1]) = 0x86ff76ffff4effff;
+  *((unsigned long*)& __m256i_result[0]) = 0x86ff32ffaeffffa0;
+  __m256i_out = __lasx_xvsadd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff8910ffff7e01;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff3573ffff8960;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff8910ffff1ca9;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffff5e5ffff8130;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff8910ffff7e01;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff3573ffff8960;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff8910ffff1ca9;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffff5e5ffff8130;
+  __m256i_out = __lasx_xvsadd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffee0000ff4c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ff050000ff3c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fff90000ff78;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffa80000ff31;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffee0000ff4c;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ff050000ff3c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fff90000ff78;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffa80000ff31;
+  __m256i_out = __lasx_xvsadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf800d0d8ffffeecf;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000383fffffdf0d;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf800d0d8ffffeecf;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000383fffffdf0d;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf000f000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf000f000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xe800c0d8fffeeece;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff383efffedf0c;
+  *((unsigned long*)& __m256i_result[1]) = 0xe800c0d8fffeeece;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff383efffedf0c;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7f7f000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7f7f000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsadd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100007fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100007fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100007fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100007fff;
+  __m256i_out = __lasx_xvsadd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ff810011;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ff810011;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ff810011;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ff810011;
+  __m256i_out = __lasx_xvsadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffe0000fffe0002;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffe0000fffe0002;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffe200000020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fffe00008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffe200000020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fffe00008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_result[3]) = 0x7575ffff75757595;
+  *((unsigned long*)& __m256i_result[2]) = 0x7575ffff7575f575;
+  *((unsigned long*)& __m256i_result[1]) = 0x7575ffff75757595;
+  *((unsigned long*)& __m256i_result[0]) = 0x7575ffff7575f575;
+  __m256i_out = __lasx_xvsadd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_result[3]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_result[2]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_result[1]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_result[0]) = 0x7575757575757575;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000f90;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000f90;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000f90;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000f90;
+  __m256i_out = __lasx_xvsadd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8d8d72728d8d7272;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8d8d72728d8d8d8d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8d8d72728d8d7272;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8d8d72728d8d8d8d;
+  *((unsigned long*)& __m256i_result[3]) = 0x8d8d72728d8d7272;
+  *((unsigned long*)& __m256i_result[2]) = 0x8d8d72728d8d8d8d;
+  *((unsigned long*)& __m256i_result[1]) = 0x8d8d72728d8d7272;
+  *((unsigned long*)& __m256i_result[0]) = 0x8d8d72728d8d8d8d;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff800000000000;
+  __m256i_out = __lasx_xvsadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00000200000008;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00000200000008;
+  __m256i_out = __lasx_xvsadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00011ffb0000bee1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00011ffb0000bee1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00011ffb0000bee1;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00011ffb0000bee1;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000080;
+  __m256i_out = __lasx_xvsadd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfc003802fc000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fc00fc00;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfc003802fc000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fc00fc00;
+  *((unsigned long*)& __m256i_result[3]) = 0xfc003802fc000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fc00fc00;
+  *((unsigned long*)& __m256i_result[1]) = 0xfc003802fc000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fc00fc00;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvsadd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000000e;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000000e;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x80fe80ff80fe00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff00ff80ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x80fe80ff80fe00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00ff80ff;
+  __m256i_out = __lasx_xvsadd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsadd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[0]) = 0x4040404040404040;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000f0f0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000f0f0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000f0f0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000f0f0;
+  __m256i_out = __lasx_xvsadd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000008c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000008c;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff00ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000008b;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff010000008b;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7efefefe80ffffff;
+  __m256i_out = __lasx_xvsadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffe97c020010001;
+  __m256i_out = __lasx_xvsadd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffeffffff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffeffffff00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffeffffff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffeffffff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x3838383838383838;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffdfffffe00;
+  *((unsigned long*)& __m256i_result[1]) = 0x3838383838383838;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffdfffffe00;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsadd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1400080008000000;
+  __m256i_out = __lasx_xvsadd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000007b00f9007e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000007b00f9007e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000007b00f9007e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000007b00f9007e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000f601f200fc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000f601f200fc;
+  __m256i_out = __lasx_xvsadd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00007fde00007fd4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00007fe000007fe0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00007fde00007fd4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00007fe000007fe0;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff7eddffff7ed3;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff7edfffff7edf;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff7eddffff7ed3;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff7edfffff7edf;
+  __m256i_out = __lasx_xvsadd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
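+  /* This case also performs a scalar extraction of element 0 via
+     __lasx_xvpickve2gr_w; the value stored in int_out is not asserted,
+     only the subsequent saturating add is checked.  */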
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000001400;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000003c01ff9;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000003c01ff9;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000001400;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000003c01ff9;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000003c01ff9;
+  __m256i_out = __lasx_xvsadd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0002fffc;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0000fffd0003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0002fffc;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0000fffd0003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001fffe0005fff9;
+  *((unsigned long*)& __m256i_result[2]) = 0x04f004f204f204f0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001fffe0005fff9;
+  *((unsigned long*)& __m256i_result[0]) = 0x04f004f204f204f0;
+  __m256i_out = __lasx_xvsadd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
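+  /* The following cases exercise the xvssub_* intrinsics, the saturating
+     subtract counterparts.  The unsigned _bu/_hu/_wu/_du forms clamp at
+     zero (0x0000 - 0x0002 yields 0x0000 for halfwords rather than
+     wrapping), and the signed forms clamp at the element minimum
+     (0xb807 - 0x47b4 saturates to 0x8000 in a halfword case further
+     down).  */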
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0010100000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0010100000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0feff00000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0feff00000000000;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000007;
+  __m256i_out = __lasx_xvssub_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffc0;
+  __m256i_out = __lasx_xvssub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000808000008080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000808000008081;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x2b2b2b2b1bd68080;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2a2ad4d4f2d8807e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x2b2b2b2b1bd68080;
+  *((unsigned long*)& __m256i_op1[0]) = 0x2a2ad4d4f2d8807e;
+  *((unsigned long*)& __m256i_result[3]) = 0xd4d5d4d5e42a7f80;
+  *((unsigned long*)& __m256i_result[2]) = 0xd5d62b2c0d287f82;
+  *((unsigned long*)& __m256i_result[1]) = 0xd4d5d4d5e42a7f80;
+  *((unsigned long*)& __m256i_result[0]) = 0xd5d62b2c0d287f82;
+  __m256i_out = __lasx_xvssub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000003ff000003ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000003ff000003ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffec;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffec;
+  *((unsigned long*)& __m256i_result[3]) = 0x000003ff000003ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000003ff000003ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x247fe49409620040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2475cef801f0ffdd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x6580668200fe0002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x419cd5b11c3c5654;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3fff3fff3fff4000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000403f3fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x45baa7ef6a95a985;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x45baa7ef6a95a985;
+  *((unsigned long*)& __m256i_result[3]) = 0x38f7414938f7882f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x38f7414938f78830;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf7fdd5ffebe1c9e3;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf7fdd5ffebe1c9e3;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000002467db99;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000003e143852;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000002467db99;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000003e143852;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffdb982466;
+  *((unsigned long*)& __m256i_result[2]) = 0xf7fdd5ffadcd9191;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffdb982466;
+  *((unsigned long*)& __m256i_result[0]) = 0xf7fdd5ffadcd9191;
+  __m256i_out = __lasx_xvssub_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000001dc;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000001dc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff24;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ff24;
+  __m256i_out = __lasx_xvssub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020202020202020;
+  __m256i_out = __lasx_xvssub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffe00000000;
+  __m256i_out = __lasx_xvssub_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1010100fefefeff0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0f8f0e8df676f778;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssub_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0020000000200000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0020000000200000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffdfffffffdfffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffdfffffffdfffff;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00fd0101;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff00fd0101;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00fd0101;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00fd0101;
+  __m256i_out = __lasx_xvssub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000f0f0f0ef;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf0f0f0f0f0f0f0ef;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000f0f0f0ef;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf0f0f0f0f0f0f0ef;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000180007f7f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffafaf80000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000180007f7f;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffafaf80000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000070f07170;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000070f0f0ef;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000070f07170;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000070f0f0ef;
+  __m256i_out = __lasx_xvssub_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff07b4ffff0707;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000b8070000a787;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff07b4ffff0707;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000b8070000a787;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff47b4ffff5879;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff47b4ffff5879;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffb7650000d496;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001800000018000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffb7650000d496;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001800000018000;
+  __m256i_out = __lasx_xvssub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fef0000ffff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fef0000ffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0100000001000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0100000001000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffe8ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffe8ffffffe8;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffe8ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffe8ffffffe8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000420080000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000420080000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000c0000005;
+  *((unsigned long*)& __m256i_op1[2]) = 0x21f8c3c4c0000005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000c0000005;
+  *((unsigned long*)& __m256i_op1[0]) = 0x21f8c3c4c0000005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000032;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000003c000000032;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000004e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvssub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000001010800;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000001010800;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffefef800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffefef800;
+  __m256i_out = __lasx_xvssub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000bdfef907bc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000bdfef907bc;
+  __m256i_out = __lasx_xvssub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010511c54440438;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010511c54440438;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000fc300000fc40;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000fc300000fc40;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff000003c0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff000003c0;
+  __m256i_out = __lasx_xvssub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f0000007f0060;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f0000007f0060;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f0000007f0060;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f0000007f0060;
+  __m256i_out = __lasx_xvssub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ffffffffffffff;
+  __m256i_out = __lasx_xvssub_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000030b8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000030b8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000030b8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000030b8;
+  __m256i_out = __lasx_xvssub_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4393a0a5bc606060;
+  *((unsigned long*)& __m256i_op0[2]) = 0x43b32feea9000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4393a0a5bc606060;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43b32feea9000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x3eab77367fff4848;
+  *((unsigned long*)& __m256i_op1[2]) = 0x408480007fff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x3eab77367fff4848;
+  *((unsigned long*)& __m256i_op1[0]) = 0x408480007fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x04e8296f3c611818;
+  *((unsigned long*)& __m256i_result[2]) = 0x032eafee29010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x04e8296f3c611818;
+  *((unsigned long*)& __m256i_result[0]) = 0x032eafee29010000;
+  __m256i_out = __lasx_xvssub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff81001dff9dff9e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff81001dff9d003b;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff81001dff9dff9e;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff81001dff9d003b;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff81001dff9dff9e;
+  *((unsigned long*)& __m256i_result[2]) = 0xff81001dff9d003b;
+  *((unsigned long*)& __m256i_result[1]) = 0xff81001dff9dff9e;
+  *((unsigned long*)& __m256i_result[0]) = 0xff81001dff9d003b;
+  __m256i_out = __lasx_xvssub_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001ff91ff100000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001ff91ff100000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000800080;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000800080;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffffff7fff80;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001ff91ff0ffdfe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffffff7fff80;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001ff91ff0ffdfe;
+  __m256i_out = __lasx_xvssub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvssub_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x40f69fe73c26f4ee;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x40f69fe73c26f4ee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_result[3]) = 0x40f69fe63c26f4f5;
+  *((unsigned long*)& __m256i_result[2]) = 0x7ff7ffff00000007;
+  *((unsigned long*)& __m256i_result[1]) = 0x40f69fe63c26f4f5;
+  *((unsigned long*)& __m256i_result[0]) = 0x7ff7ffff00000007;
+  __m256i_out = __lasx_xvssub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvssub_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[0]) = 0xff1cff1cff1cff1c;
+  __m256i_out = __lasx_xvssub_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100002000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffff00ffff8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff00ffff8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff00007fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff00007fff;
+  __m256i_out = __lasx_xvssub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000f880f87e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000f880f87e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffff0000;
+  __m256i_out = __lasx_xvssub_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101000000010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101000000010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101000000010000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101000000010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvssub_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
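+  /* The following checks exercise the __lasx_xvhaddw_* intrinsics: each
+     result element is the widened sum of an odd-indexed element of the first
+     operand and the corresponding even-indexed element of the second.  */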
+  *((unsigned long*)& __m256i_op0[3]) = 0x0501030102141923;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffd5020738b43ddb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x010200023b8e4174;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff4ff4e11410b40;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000019410000e69a;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf259905a09c23be0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000883a00000f20;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6d3c2d3a89167aeb;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000501e99b;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000109973de7;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001020f22;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000001890b7a39;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000015d050192cb;
+  *((unsigned long*)& __m256i_op0[2]) = 0x028e509508b16ee9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000033ff01020e23;
+  *((unsigned long*)& __m256i_op0[0]) = 0x151196b58fd1114d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001ffaa0000040e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000716800007bb6;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001ffe80001fe9c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000228200001680;
+  *((unsigned long*)& __m256i_result[3]) = 0x000100ab000500a0;
+  *((unsigned long*)& __m256i_result[2]) = 0x000200b800080124;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001011b000200aa;
+  *((unsigned long*)& __m256i_result[0]) = 0x00150118008f0091;
+  __m256i_out = __lasx_xvhaddw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf7ffffffffffff1f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbffffffffffffeff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf7ffffffffffff1f;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbffffffffffffeff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff6fffefffe005b;
+  *((unsigned long*)& __m256i_result[2]) = 0xffbefffefffe005a;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff6fffefffe005b;
+  *((unsigned long*)& __m256i_result[0]) = 0xffbefffefffe005a;
+  __m256i_out = __lasx_xvhaddw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000000000000;
+  __m256i_out = __lasx_xvhaddw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000017000000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc06500550055ffab;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000017000000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc06500550055ffab;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000017000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000017000000080;
+  __m256i_out = __lasx_xvhaddw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffecffffffec;
+  __m256i_out = __lasx_xvhaddw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000f6ff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000f6ff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000f6ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000f6ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7f00000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff000000000000;
+  __m256i_out = __lasx_xvhaddw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7f00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007f000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fff0000;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefefffffcfa;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffefefffffefe;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000017f0000017d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000017f0000017f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000017f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000017f;
+  __m256i_out = __lasx_xvhaddw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001341c4000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001000310000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00007f7f00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00007f7f00007fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000007f00340040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000007f000000ff;
+  __m256i_out = __lasx_xvhaddw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvhaddw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000060000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000060000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000060000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000060000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff90000fff9fff9;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff90000fff9fff9;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000001fff9fff8;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000001fff9fff8;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000001fff9fff8;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000001fff9fff8;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffff81ffffeb2f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003f6ee0570b4e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000018de;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffb4ffcec0f1;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffff81ffffeb2f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003f6ee0570b4e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000018de;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffb4ffcec0f1;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000001ffffeab0;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000e0574abc;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000018de;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000001ffcec0a5;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffe367cc82f8989a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4f90000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffc3aaa8d58f43c8;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000082f8989a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000d58f43c8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000002a5;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000002a5;
+  __m256i_out = __lasx_xvhaddw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000fffffffefffe;
+  *((unsigned long*)& __m256i_result[1]) = 0xff7fffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000fffffffefffe;
+  __m256i_out = __lasx_xvhaddw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000170017;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000170017;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
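+  /* Signed widening example: with every halfword of both operands set to
+     0xffff (-1), xvhaddw.w.h is expected to produce -1 + -1 = -2 in every
+     word, i.e. the 0xfffffffe pattern checked below.  */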
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffefffffffe;
+  __m256i_out = __lasx_xvhaddw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000004411;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000004411;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000004411;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000004411;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffc000400780087;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fe80fffc0183;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffc000400f8ff87;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff80ff00ff7c0183;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffff900000800;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffc00000078;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fffffffc;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffc000000f8;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff790000077c;
+  __m256i_out = __lasx_xvhaddw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
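+  /* For the .q.d form the pairing is per 128-bit lane: the high doubleword
+     of the first operand plus the low doubleword of the second, widened to
+     a full 128-bit result.  Here 0x7ff0000000000000 + 0 leaves the upper
+     doubleword of each lane zero, as the expected values show.  */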
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7ff0000000000000;
+  __m256i_out = __lasx_xvhaddw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvhaddw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000023;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000023;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000023;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000023;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000023;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000023;
+  __m256i_out = __lasx_xvhaddw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvhaddw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
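+  /* Unsigned widening: xvhaddw.du.wu zero-extends its 32-bit inputs, so
+     0x175e + 0xffffffff carries into the upper half of the 64-bit result
+     (0x000000010000175d below) instead of wrapping.  */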
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000236200005111;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000175e0000490d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000236200005111;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000175e0000490d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000002362;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000010000175d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000002362;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000010000175d;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000180007f7f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffafaf80000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000180007f7f;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffafaf80000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_result[2]) = 0x01fe01ae00ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff010000ff017e;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fe01ae00ff00ff;
+  __m256i_out = __lasx_xvhaddw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007ff000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007ff000000000;
+  __m256i_out = __lasx_xvhaddw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7c00000880008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7c00000880008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff0102;
+  *((unsigned long*)& __m256i_result[2]) = 0x007c000000810081;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000ff0102;
+  *((unsigned long*)& __m256i_result[0]) = 0x007c000000810081;
+  __m256i_out = __lasx_xvhaddw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0e0e0e0e0e0e0e0e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000e0e0e0e0e0e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff8fff9000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff8fff9000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff8fff9000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00010e0d00009e0e;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00009000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000e0e;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00009000;
+  __m256i_out = __lasx_xvhaddw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000033;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000033;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000003f;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[2]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[0]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x9090909090909090;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000000f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000000f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000000f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000000f;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff1fffffff1;
+  __m256i_out = __lasx_xvhaddw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffc0003fffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffc0003fffc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x007fc0083fc7c007;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x007fc0083fc7c007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f010700c70106;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f010700c70106;
+  __m256i_out = __lasx_xvhaddw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000e0010000e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000e0010000e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000ff;
+  __m256i_out = __lasx_xvhaddw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffefef800;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffefef800;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf3f3f3f3f3f3f3f3;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf3f3f3f3f3f3f3f3;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf3f3f3f3f3f3f3f3;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf3f3f3f3f3f3f3f3;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xf3f3f3f3f3f3f4f3;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xf3f3f3f3f3f3f4f3;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101000101010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101000101010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000000010000;
+  __m256i_out = __lasx_xvhaddw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000400010004;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000400010004;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000400010004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000400010004;
+  __m256i_out = __lasx_xvhaddw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff0607ffff0607;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0607ffff0607;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff0607ffff0607;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0607ffff0607;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff8fffffff8ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff8fffffff8ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff8fffffff8ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff8fffffff8ffff;
+  __m256i_out = __lasx_xvhaddw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x800000ff800000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x800000ff800000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0080000000000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0080000000000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000800080008000;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000010000ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000010000ff00;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000001ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000001ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvhaddw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffffffffffff;
+  __m256i_out = __lasx_xvhaddw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhaddw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000001fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000001ce;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000001fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000001ce;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000001fd;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000001fd;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100003ffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100003fcd;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100003ffe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100003fcd;
+  __m256i_out = __lasx_xvhaddw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xfa15fa15fa15fa14;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xfa15fa15fa15fa14;
+  __m256i_out = __lasx_xvhaddw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000300000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000300000004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000300000004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000300000004;
+  __m256i_out = __lasx_xvhaddw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
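+  /* From here on the checks move to the __lasx_xvhsubw_* family, the
+     subtracting counterpart: odd-indexed element of the first operand minus
+     the even-indexed element of the second, widened into the result
+     element.  The unsigned variants still store wrapped (negative-looking)
+     bit patterns, as several expected values below show.  */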
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvhsubw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00800080ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00800080ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffe0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000001e18;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffe0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000001e18;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffff001f;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000007fe268;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffff001f;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000007fe268;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffff90ffffff80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff90ffffff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff01ff70ff01ff80;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff01ff70ff01ff80;
+  __m256i_out = __lasx_xvhsubw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvhsubw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvhsubw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7f00000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000fffefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000fffefe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffffe40;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffbfffc;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffff00fffffff0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffff00;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x6300000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6300000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvhsubw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
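+  /* The qu.du form also wraps modulo 2^128: subtracting a nonzero low
+     doubleword from zero yields an all-ones upper doubleword and the
+     two's-complement value in the lower doubleword, matching the
+     expectation below.  */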
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0002ff80ffb70000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffb7ff80ffd0ffd8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00010000002fff9e;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffb5ff80ffd0ffd8;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0048007f002f0028;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x004a007f002f0028;
+  __m256i_out = __lasx_xvhsubw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00b7003600120000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00b7006200fc0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000fe00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00b7004100190004;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdb801b6d0962003f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xdb8a3109fe0f0024;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9a7f997fff01ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbe632a4f1c3c5653;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffe54affffffd3;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffcfae000000d8;
+  *((unsigned long*)& __m256i_result[1]) = 0x00006681000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffd668ffffa9c6;
+  __m256i_out = __lasx_xvhsubw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000055;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000055;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffefefeff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffff295329;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffefefeff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffff295329;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff01010101;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00d6acd7;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff01010101;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00d6acd7;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x120e120dedf1edf2;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x120e120dedf1edf2;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000120e120d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000120e120d;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffb80000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffb80000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[3]) = 0x00c200c200c200c2;
+  *((unsigned long*)& __m256i_result[2]) = 0x00c200c200c200bb;
+  *((unsigned long*)& __m256i_result[1]) = 0x00c200c200c200c2;
+  *((unsigned long*)& __m256i_result[0]) = 0x00c200c200c200bb;
+  __m256i_out = __lasx_xvhsubw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000001;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007efeff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007efeff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007aff7c00;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffd017d00;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007aff7c00;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffd017d00;
+  __m256i_out = __lasx_xvhsubw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbff00000bff00000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbff00000bff00000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffbff1ffffbff1;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffbff1ffffbff1;
+  __m256i_out = __lasx_xvhsubw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffffe;
+  __m256i_out = __lasx_xvhsubw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_op1[2]) = 0xdbcbdbcb0000dbcb;
+  *((unsigned long*)& __m256i_op1[1]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_op1[0]) = 0xdbcbdbcb0000dbcb;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x24342434ffff2435;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x24342434ffff2435;
+  __m256i_out = __lasx_xvhsubw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000f0000000f000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000f0000000f000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff1fffffff1;
+  __m256i_out = __lasx_xvhsubw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1010100f10100fd4;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1010100f10100fd4;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffeeffaf;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000011;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffeeffaf;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000011;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000051;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000101000000fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000051;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000101000000fff;
+  __m256i_out = __lasx_xvhsubw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000a00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000010000000a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000a00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000010000000a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000080008001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000080008001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvhsubw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000080008001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000080008001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000012;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff8180ffff8181;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff8180ffff8181;
+  __m256i_out = __lasx_xvhsubw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvhsubw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000f0f0003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000f1003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000f0001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000011;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvhsubw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000000;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffc0c0ffffbfc0;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffc0c0ffffbfc0;
+  __m256i_out = __lasx_xvhsubw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000f90;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000f90;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff70;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ff70;
+  __m256i_out = __lasx_xvhsubw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffba8300004fc2;
+  __m256i_out = __lasx_xvhsubw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000010000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000010000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffeffff10000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffeffff10000000;
+  __m256i_out = __lasx_xvhsubw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000dfffffff1;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000cfffffff3;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000dfffffff1;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000cfffffff3;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff0000000f;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff0000000f;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff0000000d;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffc001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000c000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffc001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000c000;
+  __m256i_out = __lasx_xvhsubw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000f9f9f9f9;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000faf3f3f2;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000f9f9f9f9;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000faf3f3f2;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000ff00bb;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000ff0057;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000ff00bb;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000ff0057;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000fffa003e;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fffb009c;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000fffa003e;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fffb009c;
+  __m256i_out = __lasx_xvhsubw_hu_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff0007a861;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff0007a861;
+  __m256i_out = __lasx_xvhsubw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffe00;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffe00;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffe00;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffe00;
+  __m256i_out = __lasx_xvhsubw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff8001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvhsubw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00feff0100feff01;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00feff0100feff01;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff000000ff;
+  __m256i_out = __lasx_xvhsubw_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffe000ffffffff08;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffe000ffffffff08;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000001fffffff9;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x9ffffd8020010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffff9fffffff9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x9ffffd8020010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffff9fffffff9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x40f69fe73c26f4ee;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x40f69fe73c26f4ee;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000018ffff2b13;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000018ffff2b13;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvhsubw_wu_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_du_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100002000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffc0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000000;
+  __m256i_out = __lasx_xvhsubw_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvhsubw_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003fff00003fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffebffffffebfff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffebffffffebfff;
+  __m256i_out = __lasx_xvhsubw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff7eddffff7ed3;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff7edfffff7edf;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff7eddffff7ed3;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff7edfffff7edf;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00003fef00003fea;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003ff000003ff0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00003fef00003fea;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003ff000003ff0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff3eedffff3ee3;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff3eedffff3ee3;
+  __m256i_out = __lasx_xvhsubw_qu_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op0[2]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op0[1]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op0[0]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffeffffff88;
+  *((unsigned long*)& __m256i_op1[2]) = 0x61e0000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffeffffff88;
+  *((unsigned long*)& __m256i_op1[0]) = 0x61e0000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010ffc80010ff52;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff1ffca0011ffcb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010ffc80010ff52;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff1ffca0011ffcb;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffe90ffffff80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffe90ffffff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffff90ffffff80;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff90ffffff80;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000005;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffefffefffefffe;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000023;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000023;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000023;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000023;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01c601c6fe3afe3a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01c601c6fe3afe3a;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffc6ffc6003a003a;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffc6ffc6003a003a;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000011;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000011;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x457db03e457db03e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x457db03e45a87310;
+  *((unsigned long*)& __m256i_op0[1]) = 0x457db03e457db03e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x457db03e45a87310;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x007d003e007d003e;
+  *((unsigned long*)& __m256i_result[2]) = 0x007d003effa80010;
+  *((unsigned long*)& __m256i_result[1]) = 0x007d003e007d003e;
+  *((unsigned long*)& __m256i_result[0]) = 0x007d003effa80010;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x386000003df80000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x386000003df80000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0c6a240000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0c6a240000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ca0000fff80000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ca0000fff80000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
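+  /* The cases below switch to __lasx_xvaddwev_w_h.  Assuming the usual
+     LASX widening-add semantics, it sign-extends the even-indexed 16-bit
+     elements of both operands and adds them into 32-bit results; e.g. in
+     the first case below, 0xb100 + 0x3bc0 -> 0xffffb100 + 0x00003bc0
+     = 0xffffecc0, the low word of the expected result.  */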
+  *((unsigned long*)& __m256i_op0[3]) = 0x372e9d75e8aab100;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5464fbfc416b9f71;
+  *((unsigned long*)& __m256i_op0[1]) = 0x31730b5beb7c99f5;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0d8264202b8ea3f0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x80c72fcd40fb3bc0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x84bd087966d4ace0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x26aa68b274dc1322;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe072db2bb9d4cd40;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffcd42ffffecc0;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000475ffff4c51;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000740dffffad17;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003f4bffff7130;
+  __m256i_out = __lasx_xvaddwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ff80;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000468600007f79;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000f3280000dfff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000022;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffff80;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000468600008078;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffff328ffffe021;
+  __m256i_out = __lasx_xvaddwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op0[2]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op0[1]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op0[0]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op1[3]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op1[2]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op1[1]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op1[0]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000399400003994;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000399400003994;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000399400003994;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000399400003994;
+  __m256i_out = __lasx_xvaddwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xe161616161614f61;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe161616161614f61;
+  *((unsigned long*)& __m256i_op1[1]) = 0xe161616161614f61;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe161616161614f61;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000616100004f61;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000616100004f61;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000616100004f61;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000616100004f61;
+  __m256i_out = __lasx_xvaddwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvaddwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000b8f81b8c840e4;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000b8f81b8c840e4;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000504f00002361;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff8f81000040e4;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000504f00002361;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff8f81000040e4;
+  __m256i_out = __lasx_xvaddwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000012;
+  __m256i_out = __lasx_xvaddwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffa3;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000165a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffa3;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000165a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x40b240b330313031;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff7fff5d425d42;
+  *((unsigned long*)& __m256i_op1[1]) = 0x40b240b330313031;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff7fff5d425d42;
+  *((unsigned long*)& __m256i_result[3]) = 0x000040b200002fd4;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007fff0000739c;
+  *((unsigned long*)& __m256i_result[1]) = 0x000040b200002fd4;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fff0000739c;
+  __m256i_out = __lasx_xvaddwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00003fef00003fea;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003ff000003ff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00003fef00003fea;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003ff000003ff0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00003fef00003fea;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003ff000003ff0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00003fef00003fea;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003ff000003ff0;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007fde00007fd4;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007fe000007fe0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007fde00007fd4;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fe000007fe0;
+  __m256i_out = __lasx_xvaddwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
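+  /* The cases below exercise __lasx_xvaddwev_d_w, which (assuming the
+     usual LASX semantics) sign-extends the even-indexed 32-bit elements
+     to 64 bits before adding; e.g. 0x002e2100 + 0x01000010 = 0x012e2110
+     in one of the expected results below.  */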
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000002e2100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000001000010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000012e2110;
+  __m256i_out = __lasx_xvaddwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000583800;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000100000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000583800;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000100000;
+  __m256i_out = __lasx_xvaddwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007bbbbbbb;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007bbbbbbb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000073333333;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000073333333;
+  __m256i_out = __lasx_xvaddwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010000;
+  __m256i_out = __lasx_xvaddwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x007f807f007e8080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f807f007e806f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x007f807f007e8080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f807f007e806f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000023;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000023;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000007e8080;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000007e8092;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000007e8080;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000007e8092;
+  __m256i_out = __lasx_xvaddwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000062d4;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000006338;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0010000100000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010000100000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0010000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010000100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff800080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff800080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff80000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff80000000;
+  __m256i_out = __lasx_xvaddwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfe01fe01fc01fc01;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfe01fe01fc01fc01;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffc01fc01;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffc01fc01;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000003fc03bbc;
+  __m256i_out = __lasx_xvaddwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
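+  /* The cases below exercise __lasx_xvaddwev_q_d, which (assuming the
+     usual LASX semantics) sign-extends the even-indexed 64-bit element of
+     each 128-bit half to 128 bits and adds; e.g. 0x7fffffffffffffff plus
+     sign-extended 0xe5fb66c81da8e5bb gives the 128-bit value
+     0x0000000000000000_65fb66c81da8e5ba in the first case below.  */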
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1b9763952fc4c101;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe37affb42fc05f69;
+  *((unsigned long*)& __m256i_op1[1]) = 0x18b988e64facb558;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe5fb66c81da8e5bb;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xe37affb42fc05f69;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x65fb66c81da8e5ba;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1010101010101012;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1010101010101012;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1010101010101093;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1111111111111113;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1010101110101011;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1111111211111112;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x5980000000000000;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffefe00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffefe00000000;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000002800000010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000002800000010;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff00040000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff0127000c0010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff012700040010;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc0008000c0008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc0008000c0008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc0008000c0008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc0008000c0008000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x8001000180010000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x8001000180010000;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff800200000002;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff800200000002;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000020000000200;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffe97c020010001;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000001e001e001e0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000001e001e001e0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
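+  /* The xvaddwod_* cases start here.  __lasx_xvaddwod_h_b presumably adds
+     the odd-indexed signed byte elements, widened to halfwords; e.g.
+     0x80 + 0x80 -> (-128) + (-128) = -256 = 0xff00, the low halfword of
+     the first expected result below.  */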
+  *((unsigned long*)& __m256i_op0[3]) = 0x9240000000008025;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffff24affff8025;
+  *((unsigned long*)& __m256i_op0[1]) = 0xb2c0000000008006;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffb341ffff8006;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9240000000008025;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffff24affff8025;
+  *((unsigned long*)& __m256i_op1[1]) = 0xb2c0000000008006;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffb341ffff8006;
+  *((unsigned long*)& __m256i_result[3]) = 0xff2400000000ff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffeffe4fffeff00;
+  *((unsigned long*)& __m256i_result[1]) = 0xff6400000000ff00;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffeff66fffeff00;
+  __m256i_out = __lasx_xvaddwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff04ff00ff00ff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff04ff00ff00ff00;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffefffefffefffe;
+  __m256i_out = __lasx_xvaddwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffe0000fffe0002;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffe0000fffe0002;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000fffeffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000fffeffff;
+  __m256i_out = __lasx_xvaddwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffc0003fffc0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffc0003fffc0;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff0000ffff;
+  __m256i_out = __lasx_xvaddwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7ffeffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7ffeffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000fc300000fc40;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000fc300000fc40;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f007bfffffffb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f007bfffffffb;
+  __m256i_out = __lasx_xvaddwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000201220001011c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000201220001011c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvaddwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffe0ffe000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fa0001fff808000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffe0ffe000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fa0001fff808000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f0000ffffff80;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f0000ffffff80;
+  __m256i_out = __lasx_xvaddwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0004000400040004;
+  __m256i_out = __lasx_xvaddwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
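+  /* The cases below exercise __lasx_xvaddwod_w_h, which presumably adds
+     the odd-indexed signed halfword elements, widened to words; e.g.
+     0xffff + 0x0000 -> 0xffffffff in the expected results below.  */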
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007ff000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007ff000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007ff000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvaddwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000001fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000001fe;
+  __m256i_out = __lasx_xvaddwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000d24;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000d24;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4ffc3f7800000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3fc03f6400000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4ffc3f7800000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3fc03f6400000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000050fd00000101;
+  *((unsigned long*)& __m256i_result[2]) = 0x000040c100000101;
+  *((unsigned long*)& __m256i_result[1]) = 0x000050fd00000101;
+  *((unsigned long*)& __m256i_result[0]) = 0x000040c100000101;
+  __m256i_out = __lasx_xvaddwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvaddwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000c9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000c9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvaddwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
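+  /* The cases below exercise __lasx_xvaddwod_d_w, which presumably adds
+     the odd-indexed signed word elements, widened to doublewords; e.g.
+     0x00008000 + 0xffffffff -> 32768 + (-1) = 0x0000000000007fff in the
+     first expected result below.  */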
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000800080008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000800080008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000800080008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000007fff;
+  __m256i_out = __lasx_xvaddwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvaddwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000006d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000400008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000006d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000400008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
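+  /* __lasx_xvaddwod_q_d: sign-extend the odd (high) doubleword of each
+     128-bit lane of both operands to 128 bits and add them, producing one
+     128-bit sum per lane.  */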
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000800080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc9d8080067f50020;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc70000020000c000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ffffffffffff7ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffe06df0d7;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ffffffffffff7ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffbe8b470f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007ffffffff7ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x49d8080067f4f81f;
+  __m256i_out = __lasx_xvaddwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvaddwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff605a;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff605a;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffebeb8;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffebeb8;
+  __m256i_out = __lasx_xvaddwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1111111111111111;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1111111111111111;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1111111111111111;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1111111111111111;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1111111111111111;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1111111111111111;
+  __m256i_out = __lasx_xvaddwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffeffffffdd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffdc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvaddwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000d6d6d;
+  __m256i_out = __lasx_xvaddwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
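+  /* __lasx_xvsubwev_h_b: sign-extend the even-indexed bytes of both
+     operands to halfwords and subtract op1 from op0.  */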
+  *((unsigned long*)& __m256i_op0[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsubwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff47b4ffff5879;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff47b4ffff5879;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffb10001ff8f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001004c0001ff87;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffb10001ff8f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001004c0001ff87;
+  __m256i_out = __lasx_xvsubwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffff7;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffff7;
+  __m256i_out = __lasx_xvsubwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ff02ff80fede;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ff02ff80fede;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000fffe00800022;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fffe00800022;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsubwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvsubwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000100040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000100040;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffc0;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fff0ffc0;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffc0;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fff0ffc0;
+  __m256i_out = __lasx_xvsubwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffff0000;
+  __m256i_out = __lasx_xvsubwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffe4ffffffe4;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffe4ffffffe4;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffe4ffffffe4;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffe4ffffffe4;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001d0000001c;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001d0000001c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001d0000001c;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001d0000001c;
+  __m256i_out = __lasx_xvsubwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
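+  /* __lasx_xvsubwev_w_h: sign-extend the even-indexed halfwords of both
+     operands to words and subtract op1 from op0.  */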
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffeff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffeff00000000;
+  __m256i_out = __lasx_xvsubwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010203;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffefefffffcfa;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffefefffffefe;
+  __m256i_out = __lasx_xvsubwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000600000006;
+  __m256i_out = __lasx_xvsubwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
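+  /* __lasx_xvsubwev_d_w: sign-extend the even-indexed words of both
+     operands to doublewords and subtract op1 from op0.  */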
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000102;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000fffffffefe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsubwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000008080809;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000008080809;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000008080809;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000008080809;
+  __m256i_out = __lasx_xvsubwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000300000003;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000300000003;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000300000003;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000300000003;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffffd;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffffd;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffffd;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffffd;
+  __m256i_out = __lasx_xvsubwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffff1cff1c;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffff1cff18;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffff1cff1c;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffff1cff18;
+  __m256i_out = __lasx_xvsubwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvsubwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000001400;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000003c01ff9;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000003c01ff9;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffec00;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffc3fe007;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffc3fe007;
+  __m256i_out = __lasx_xvsubwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
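+  /* __lasx_xvsubwev_q_d: sign-extend the even (low) doubleword of each
+     128-bit lane of both operands to 128 bits and subtract op1 from op0,
+     producing one 128-bit difference per lane.  */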
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00010000;
+  __m256i_out = __lasx_xvsubwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000010100000102;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010100000102;
+  __m256i_out = __lasx_xvsubwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x007fffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007fffff007fffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x007fffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007fffff007fffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00c200c200c200c2;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00c200c200c200bb;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00c200c200c200c2;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00c200c200c200bb;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffbdff3cffbdff44;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffbdff3cffbdff44;
+  __m256i_out = __lasx_xvsubwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
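+  /* __lasx_xvsubwod_h_b: sign-extend the odd-indexed bytes of both
+     operands to halfwords and subtract op1 from op0.  */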
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_result[3]) = 0xffe4ffe6ffe5ffe6;
+  *((unsigned long*)& __m256i_result[2]) = 0xffe4ffe6ffe5ffe6;
+  *((unsigned long*)& __m256i_result[1]) = 0xffe4ffe6ffe5ffe6;
+  *((unsigned long*)& __m256i_result[0]) = 0xffe4ffe6ffe5ffe6;
+  __m256i_out = __lasx_xvsubwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010001;
+  __m256i_out = __lasx_xvsubwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvsubwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x017e01fe01fe01fe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0586060601fe0202;
+  *((unsigned long*)& __m256i_op1[1]) = 0x017e01fe01fe0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0586060601fe0004;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffbfffafffffffe;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffbfffaffff0000;
+  __m256i_out = __lasx_xvsubwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_op0[1]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffefffef00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m256i_result[1]) = 0xffefffef00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffefffefffefffef;
+  __m256i_out = __lasx_xvsubwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000003ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvsubwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsubwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
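+  /* __lasx_xvsubwod_w_h: sign-extend the odd-indexed halfwords of both
+     operands to words and subtract op1 from op0.  */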
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffff90ffffff80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff90ffffff80;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffff6;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffff6;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffff6;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffff6;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100007fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100007fff;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffff8000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000043efffff8000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffff8000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000043efffff8000;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x003f60041f636003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x003f60041f636003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000003f00001f63;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000003f00001f63;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000400080ffc080;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000400080ffc080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff80ff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffff80ff;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc3030000ff800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc3030000ff800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003cfc0000006f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003cfc0000006f;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffff6361;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4d0a902890b800dc;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffff6361;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4d0a902890b800dc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffb2f600006f48;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffb2f600006f48;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
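+  /* __lasx_xvsubwod_d_w: subtract the odd-indexed signed 32-bit elements,
+     widening each difference to a 64-bit result.  */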
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000001fffe;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000060000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000060000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000020202020;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fff8ff40;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ff0100090040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fff8ff40;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ff0100090040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff02;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ff02;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000700000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000700000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000700000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000700000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffe00;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffe00;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffe00;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffe00;
+  __m256i_out = __lasx_xvsubwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
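+  /* __lasx_xvsubwod_q_d: subtract the odd-indexed signed 64-bit elements,
+     widening each difference to a 128-bit result.  */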
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1e17ffffd0fc6772;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1e17ffffebf6ded2;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1e17ffffd0fc6772;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1e17ffffebf6ded2;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xe1e800002f03988d;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xe1e800002f03988d;
+  __m256i_out = __lasx_xvsubwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9cffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9cffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x6300000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x6300000000000001;
+  __m256i_out = __lasx_xvsubwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000808;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xf7f7f7f7f7f7f7f8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff1fff1fff1fff1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000e000e000e000e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000e000e000e000e;
+  __m256i_out = __lasx_xvsubwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0a0a000000000a0a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0a0a0a0a00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0a0a000000000a0a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0a0a0a0a00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0a0a000000000a0a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0a0a000000000a0a;
+  __m256i_out = __lasx_xvsubwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvsubwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
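+  /* __lasx_xvaddwev_h_bu: add the even-indexed unsigned 8-bit elements,
+     widening each sum to a 16-bit result.  */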
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0010100000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0010100000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010000000000000;
+  __m256i_out = __lasx_xvaddwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffc0003fffa0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fb010201f900ff;
+  __m256i_out = __lasx_xvaddwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000005554;
+  *((unsigned long*)& __m256i_op1[2]) = 0xaaaa0000aaacfffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000005554;
+  *((unsigned long*)& __m256i_op1[0]) = 0xaaaa0000aaacfffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000054;
+  *((unsigned long*)& __m256i_result[2]) = 0x00aa000000ac00fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000054;
+  *((unsigned long*)& __m256i_result[0]) = 0x00aa000000ac00fe;
+  __m256i_out = __lasx_xvaddwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x017f01fe01ff01fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x05960616020e0203;
+  *((unsigned long*)& __m256i_op0[1]) = 0x017f01fe01ff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x05960616020e0005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x017f01fe01ff01fe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x05960616020e0203;
+  *((unsigned long*)& __m256i_op1[1]) = 0x017f01fe01ff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x05960616020e0005;
+  *((unsigned long*)& __m256i_result[3]) = 0x00fe01fc01fe01fc;
+  *((unsigned long*)& __m256i_result[2]) = 0x012c002c001c0006;
+  *((unsigned long*)& __m256i_result[1]) = 0x00fe01fc01fe0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x012c002c001c000a;
+  __m256i_out = __lasx_xvaddwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xd207e90001fb16ef;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc8eab25698f97e90;
+  *((unsigned long*)& __m256i_op0[1]) = 0xd207e90001fb16ef;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc8eab25698f97e90;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0007000000fb00ef;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ea005600f90090;
+  *((unsigned long*)& __m256i_result[1]) = 0x0007000000fb00ef;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ea005600f90090;
+  __m256i_out = __lasx_xvaddwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffc03b1fc5e050;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6a9e3fa2603a2000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffc03b1fc5e050;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6a9e3fa2603a2000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffc03fffffffc0;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffc00000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffc03fffffffc0;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffc00000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x01fe007a01c40110;
+  *((unsigned long*)& __m256i_result[2]) = 0x019d00a2003a0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x01fe007a01c40110;
+  *((unsigned long*)& __m256i_result[0]) = 0x019d00a2003a0000;
+  __m256i_out = __lasx_xvaddwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fe36364661af18f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fe36364661af18f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00e30064001a008f;
+  *((unsigned long*)& __m256i_result[2]) = 0x00e3006300e30063;
+  *((unsigned long*)& __m256i_result[1]) = 0x00e30064001a008f;
+  *((unsigned long*)& __m256i_result[0]) = 0x00e3006300e30063;
+  __m256i_out = __lasx_xvaddwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000013;
+  __m256i_out = __lasx_xvaddwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000010000685e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000020a4ffffbe4f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000010000685e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000020a4ffffbe4f;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000a400ff004f;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000a400ff004f;
+  __m256i_out = __lasx_xvaddwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0002ffff00020002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x04f504f104f504f5;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0002ffff00020002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x04f504f104f504f5;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000200ff00020002;
+  *((unsigned long*)& __m256i_result[2]) = 0x00f500f100f500f5;
+  *((unsigned long*)& __m256i_result[1]) = 0x000200ff00020002;
+  *((unsigned long*)& __m256i_result[0]) = 0x00f500f100f500f5;
+  __m256i_out = __lasx_xvaddwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
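+  /* __lasx_xvaddwev_w_hu: add the even-indexed unsigned 16-bit elements,
+     widening each sum to a 32-bit result.  */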
+  *((unsigned long*)& __m256i_op0[3]) = 0x000019410000e69a;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf259905a0c126604;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000883a00000f20;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6d3c2d3aa1c82947;
+  *((unsigned long*)& __m256i_op1[3]) = 0x372e9d75e8aab100;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc5c085372cfabfba;
+  *((unsigned long*)& __m256i_op1[1]) = 0x31730b5beb7c99f5;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0658f2dc0eb21e3c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000b6b60001979a;
+  *((unsigned long*)& __m256i_result[2]) = 0x00011591000125be;
+  *((unsigned long*)& __m256i_result[1]) = 0x000093950000a915;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001201600004783;
+  __m256i_out = __lasx_xvaddwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffff6ff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffff6ff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000f6ff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000f6ff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8011ffee804c004c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00faff0500c3ff3c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x80f900f980780078;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0057ffa800ceff31;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff000000ff000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ff000000ff00;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff000000ff000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffee0000ff4c;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ff050000ff3c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fff90000ff78;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffa80000ff31;
+  __m256i_out = __lasx_xvaddwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fffc7f7f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffc000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fffc7f7f;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffc000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8001b0b1b4b5dd9f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8001b0b1b4b5dd9f;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000b0b100015d1e;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001fffe0001bfff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000b0b100015d1e;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001fffe0001bfff;
+  __m256i_out = __lasx_xvaddwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000fe200000fe1f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fe200000fe1f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffc0ffc1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x003f00000000003f;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffc0ffc1;
+  *((unsigned long*)& __m256i_op0[0]) = 0x003f00000000003f;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001fffe0001ffc0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0001003e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001fffe0001ffc0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0001003e;
+  __m256i_out = __lasx_xvaddwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0020010101610000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0061200000610000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0020010101610000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0061200000610000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000101000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00011fff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000101000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00011fff0000ffff;
+  __m256i_out = __lasx_xvaddwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
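+  /* __lasx_xvaddwev_d_wu: add the even-indexed unsigned 32-bit elements,
+     widening each sum to a 64-bit result.  */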
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvaddwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000013ffffffec;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000013ffffebd8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000013ffffffec;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000013ffffebd8;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffec;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffebd8;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffec;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffebd8;
+  __m256i_out = __lasx_xvaddwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000c0007;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000c0007;
+  *((unsigned long*)& __m256i_op1[3]) = 0x3abb3abbbabababa;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0080000000800080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x3abb3abbbabababa;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0080000000800080;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000babababa;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000008c0087;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000babababa;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000008c0087;
+  __m256i_out = __lasx_xvaddwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvaddwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
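+  /* __lasx_xvaddwev_q_du: add the even-indexed unsigned 64-bit elements,
+     widening each sum to a 128-bit result.  */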
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvaddwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000003ff000003ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000003ff000003ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000a00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000010000000a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000a00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000010000000a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000010000000a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000010000000a;
+  __m256i_out = __lasx_xvaddwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8060000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8060000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x805f0000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x805f0000ffffffff;
+  __m256i_out = __lasx_xvaddwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfe01fe010000fd02;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fc03fc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfe01fe010000fd02;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003fc03fc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfe01fe010000fd02;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000003fc03fc0;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfe01fe010000fd02;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000003fc03fc0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007f807f80;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007f807f80;
+  __m256i_out = __lasx_xvaddwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff1cff1cff1cff1c;
+  __m256i_out = __lasx_xvaddwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
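+  /* __lasx_xvaddwod_h_bu: add the odd-indexed unsigned 8-bit elements,
+     widening each sum to a 16-bit result.  */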
+  *((unsigned long*)& __m256i_op0[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ffe00007f000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x017e00ff017e00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff017e01fe;
+  __m256i_out = __lasx_xvaddwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xb70036db12c4007e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xb7146213fc1e0049;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000fefe02fffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xb71c413b199d04b5;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00b7003600120000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00b7006200fc0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000fe00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00b7004100190004;
+  __m256i_out = __lasx_xvaddwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007aff7c00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffd017d00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007aff7c00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffd017d00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000008e7c00;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000067751500;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000008e7c00;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000067751500;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000007a00f8;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff01640092;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000007a00f8;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff01640092;
+  __m256i_out = __lasx_xvaddwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffa0078fffa0074;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffa0078fffa0074;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff000000ff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff000000ff0000;
+  __m256i_out = __lasx_xvaddwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00ff00ff;
+  __m256i_out = __lasx_xvaddwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff008000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff008000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff008000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff008000000000;
+  __m256i_out = __lasx_xvaddwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffff0020;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff8001ffff0001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0020;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff8001ffff0001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff008000ff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff008000ff0000;
+  __m256i_out = __lasx_xvaddwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
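+  /* __lasx_xvaddwod_w_hu: add the odd-indexed unsigned 16-bit elements,
+     widening each sum to a 32-bit result.  */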
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000000;
+  __m256i_out = __lasx_xvaddwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000804000004141;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00017fff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00007fff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000004444;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007bbb0000f777;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000004444;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007bbb0000f777;
+  __m256i_out = __lasx_xvaddwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4010000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3e6ce7d9cb7afb62;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4010000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3e6ce7d9cb7afb62;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000401000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003e6c0000cb7a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000401000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003e6c0000cb7a;
+  __m256i_out = __lasx_xvaddwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3aadec4f6c7975b1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3abac5447fffca89;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3aadec4f6c7975b1;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3abac5447fffca89;
+  *((unsigned long*)& __m256i_op1[3]) = 0x3aadec4f6c7975b1;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3abac5447fffca89;
+  *((unsigned long*)& __m256i_op1[1]) = 0x3aadec4f6c7975b1;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3abac5447fffca89;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000755a0000d8f2;
+  *((unsigned long*)& __m256i_result[2]) = 0x000075740000fffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000755a0000d8f2;
+  *((unsigned long*)& __m256i_result[0]) = 0x000075740000fffe;
+  __m256i_out = __lasx_xvaddwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffee00ba;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffee00ba;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffee;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffee;
+  __m256i_out = __lasx_xvaddwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9ffffd8020010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffff9fffffff9;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9ffffd8020010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffff9fffffff9;
+  *((unsigned long*)& __m256i_result[3]) = 0x00009fff00002001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00009fff00002001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvaddwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvaddwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
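+  /* Tests for __lasx_xvaddwod_d_wu: widening add of the odd-indexed
+     unsigned word elements, producing doubleword results.  */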
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000001a00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000900000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000001a00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000900000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000001a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000001a;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000009;
+  __m256i_out = __lasx_xvaddwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000800000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000800000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000800000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000800000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000800000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000800000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000800000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100010000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100010000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100010000;
+  __m256i_out = __lasx_xvaddwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvaddwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00153f1594ea02ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffffffff0100;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff15c1ea95ea02ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000153f15;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ff15c1ea;
+  __m256i_out = __lasx_xvaddwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff040000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff040000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00fe00fe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff00fe00fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00fe00fe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff00fe00fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100fe04ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100fe04ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000ff00ff;
+  __m256i_out = __lasx_xvaddwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvaddwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000003ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvaddwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00f9f9f900000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00f9f9f900000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000f9f9f9f9;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000faf3f3f2;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000f9f9f9f9;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000faf3f3f2;
+  __m256i_out = __lasx_xvaddwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001fff000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000029170;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001fff000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000029170;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
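+  /* Tests for __lasx_xvaddwod_q_du: widening add of the odd-indexed
+     unsigned doubleword elements, producing quadword results.  */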
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff7fffffff;
+  __m256i_out = __lasx_xvaddwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc3f0c3f0c3f0c3f0;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc3f0c3f0c3f0c3f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc3f0c3f0c3f0c3f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc3f0c3f0c3f0c3f0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xc3f0c3f0c3f0c3f0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xc3f0c3f0c3f0c3f0;
+  __m256i_out = __lasx_xvaddwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op1[3]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_op1[1]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe7e7e7e7e7e7e7e7;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xe6e8e6e8e6e8d719;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xe6e8e6e8e6e8d719;
+  __m256i_out = __lasx_xvaddwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000003fffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000003fffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvaddwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
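+  /* Tests for __lasx_xvsubwev_h_bu: widening subtract of the even-indexed
+     unsigned byte elements, producing halfword results.  */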
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000007f0000007f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000007f0000007f;
+  *((unsigned long*)& __m256i_result[1]) = 0xff01ff80ff01ff80;
+  *((unsigned long*)& __m256i_result[0]) = 0xff01ff800000007e;
+  __m256i_out = __lasx_xvsubwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0043030300400300;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0043030300400300;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0043030300400100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0043030300400100;
+  *((unsigned long*)& __m256i_result[3]) = 0xffdd001dffe00020;
+  *((unsigned long*)& __m256i_result[2]) = 0xffdd001dffe00031;
+  *((unsigned long*)& __m256i_result[1]) = 0xffdd001dffe00020;
+  *((unsigned long*)& __m256i_result[0]) = 0xffdd001dffe00031;
+  __m256i_out = __lasx_xvsubwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000001ffe2000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x001fe020001fe020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000001ffe2000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x001fe020001fe020;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff0020ff1f001f;
+  *((unsigned long*)& __m256i_result[2]) = 0xffe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff0020ff1f001f;
+  *((unsigned long*)& __m256i_result[0]) = 0xffe1ffe0ffe1ffe0;
+  __m256i_out = __lasx_xvsubwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffee00ba;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffee00ba;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x80008000fff98000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x80008000fff98000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00fffff500ba;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00fffff500ba;
+  __m256i_out = __lasx_xvsubwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000004efffe00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000047000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000004efffe00;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000047000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ff01;
+  __m256i_out = __lasx_xvsubwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[0]) = 0xff01ff01ff01ff01;
+  __m256i_out = __lasx_xvsubwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
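+  /* Tests for __lasx_xvsubwev_w_hu: widening subtract of the even-indexed
+     unsigned halfword elements, producing word results.  */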
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000fffc0000fffc;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fffc0000fffc;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001504f4c4b2361;
+  *((unsigned long*)& __m256i_op0[2]) = 0x303338a48f374969;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001504f4c4b2361;
+  *((unsigned long*)& __m256i_op0[0]) = 0x303338a48f374969;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff47b4ffff5879;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000504fffff3271;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff47b4ffff5879;
+  __m256i_out = __lasx_xvsubwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000000;
+  __m256i_out = __lasx_xvsubwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
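+  /* Tests for __lasx_xvsubwev_d_wu: widening subtract of the even-indexed
+     unsigned word elements, producing doubleword results.  */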
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffefffffefc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffbf4;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000006;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000308;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvsubwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000010100000102;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000010100000102;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fffffefd;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fffffefd;
+  __m256i_out = __lasx_xvsubwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffffe40;
+  *((unsigned long*)& __m256i_op1[3]) = 0x80000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x80000000ffff8c80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x80000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x80000000fff0e400;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000f1a40;
+  __m256i_out = __lasx_xvsubwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000003effe1;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000003effe1;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000003effe1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000003effe1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001fffe0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001fffe00010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001fffe0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001fffe00010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001fffe0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001fffe00010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001fffe0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001fffe00010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fffffff7;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fffffff7;
+  __m256i_out = __lasx_xvsubwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffff0002;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff0002;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffff0002;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffff0002;
+  __m256i_out = __lasx_xvsubwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffff0000;
+  __m256i_out = __lasx_xvsubwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
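+  /* Tests for __lasx_xvsubwev_q_du: widening subtract of the even-indexed
+     unsigned doubleword elements, producing quadword results.  */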
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x5d20a0a15d20a0a1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x5d20a0a15d20a0a1;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff00ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010000000001;
+  __m256i_out = __lasx_xvsubwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffeffffff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffeffffff00;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000100;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000100;
+  __m256i_out = __lasx_xvsubwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0040000000000003;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0040000000000003;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvsubwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
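+  /* Tests for __lasx_xvsubwod_h_bu: widening subtract of the odd-indexed
+     unsigned byte elements, producing halfword results.  */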
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_result[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[0]) = 0xff01ff01ff01ff01;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000020001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffcc8000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007dfdff4b;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xff01ff3400000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ff83ff01;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ff010000ff01;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ff010000ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ff010000ff01;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ff010000ff01;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_op0[1]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff0fff0ff01ff01;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff0fff0fff0fff0;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff0fff0ff01ff01;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff0fff0fff0fff0;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdf80df80df80df80;
+  *((unsigned long*)& __m256i_op1[2]) = 0xdfc2df80df80df87;
+  *((unsigned long*)& __m256i_op1[1]) = 0xdf80df80df80df80;
+  *((unsigned long*)& __m256i_op1[0]) = 0xdfc2df80df80df87;
+  *((unsigned long*)& __m256i_result[3]) = 0xff21ff21ff21ff21;
+  *((unsigned long*)& __m256i_result[2]) = 0xff21ff21ff21ff21;
+  *((unsigned long*)& __m256i_result[1]) = 0xff21ff21ff21ff21;
+  *((unsigned long*)& __m256i_result[0]) = 0xff21ff21ff21ff21;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4079808280057efe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007ffcfcfd020202;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x004000800080007e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000fc00fd0002;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xff01ff0100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff01ff0100000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xff01ff0100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff01ff0100000000;
+  __m256i_out = __lasx_xvsubwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
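+  /* From here on the operation under test is __lasx_xvsubwod_w_hu: per the
+     expected vectors, the odd-indexed unsigned halfword elements are
+     subtracted and the differences widened to signed words
+     (e.g. 0x0000 - 0xffff == 0xffff0001 in the first case below).  */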
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ffe00007f000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff000100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff7fff00007f00;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff000100007fff;
+  __m256i_out = __lasx_xvsubwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000b8f81b8c840e4;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000b8f81b8c840e4;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffb3b4;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffff5ffff4738;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffb3b4;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff5ffff4738;
+  __m256i_out = __lasx_xvsubwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00009fff9ffffd80;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff20010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00009fff9ffffd80;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff20010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00002080df5b41cf;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00002080df5b41cf;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000009fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff40a6;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000009fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffff40a6;
+  __m256i_out = __lasx_xvsubwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
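+  /* __lasx_xvsubwod_d_wu: the expected values below follow from subtracting
+     the odd-indexed unsigned word elements and widening the differences to
+     signed doublewords (e.g. 0x00000000 - 0x00007fff == 0xffffffffffff8001
+     in the first case).  */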
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00007fffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00007fffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffff8001;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffff8001;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000001;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x020afefb08140000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0003fffc00060000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff00ffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff0001ff02;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff020afefc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000003fefd;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1514151415141514;
+  *((unsigned long*)& __m256i_op1[2]) = 0x151415141514e335;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1514151415141514;
+  *((unsigned long*)& __m256i_op1[0]) = 0x151415141514e335;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000e9ece9ec;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000e9ece9ec;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000e9ece9ec;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000e9ece9ec;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0008000800080008;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000c005e000c0029;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0004005600040020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000300000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000300000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000060008;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000c005b;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffe0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000040053;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf7f8f7f8f800f800;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003f784000ff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf7f8f7f84000fff9;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003f784000ff80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000f7f8f7f8;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000003f78;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000f7f8f7f8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000003f78;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7000700070007000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7000700070007000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000070007000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7000700070007000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff8fff9000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff8fff9000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff8fff9000;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc848c848c848c848;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8848c848c848c848;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc848c848c848c848;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8848c848c848c848;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff37b737b8;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff77b737b8;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff37b737b8;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff77b737b8;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x457db03e457db03e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x457db03e45a87310;
+  *((unsigned long*)& __m256i_op0[1]) = 0x457db03e457db03e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x457db03e45a87310;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000457db03e;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff457db03f;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000457db03e;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff457db03f;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvsubwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
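+  /* __lasx_xvsubwod_q_du: here the expected vectors assume the odd (upper)
+     unsigned doubleword of each 128-bit lane is subtracted and the
+     difference widened to a full 128-bit quadword result per lane.  */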
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvsubwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000001;
+  __m256i_out = __lasx_xvsubwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000b2673a90896a4;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000b2673a90896a4;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffafafb3b3dc9d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffafafb3b3dc9d;
+  __m256i_out = __lasx_xvsubwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000008050501;
+  __m256i_out = __lasx_xvsubwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001fff000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000029170;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001fff000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000029170;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001fff000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001fff000;
+  __m256i_out = __lasx_xvsubwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000090b0906;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsubwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
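+  /* Switch to the mixed-signedness widening adds.  For
+     __lasx_xvaddwev_h_bu_b the expected values assume the even-indexed bytes
+     are added, the first operand's taken as unsigned and the second's as
+     signed, widening to halfwords (e.g. 0x19 + 0x19 == 0x0032 below).  */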
+  *((unsigned long*)& __m256i_op0[3]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_result[3]) = 0x0036003200360032;
+  *((unsigned long*)& __m256i_result[2]) = 0x0036003200360032;
+  *((unsigned long*)& __m256i_result[1]) = 0x0036003200360032;
+  *((unsigned long*)& __m256i_result[0]) = 0x0036003200360032;
+  __m256i_out = __lasx_xvaddwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000170017;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000170017;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000170017;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000170017;
+  __m256i_out = __lasx_xvaddwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001fffe0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001fffe00010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001fffe0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001fffe00010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000100fe000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000100fe00010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x000100fe000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000100fe00010001;
+  __m256i_out = __lasx_xvaddwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvaddwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
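+  /* __lasx_xvaddwev_w_hu_h: even-indexed halfwords, unsigned from the first
+     operand plus signed from the second, widened to words; e.g. in the first
+     case below 0x0005 + 0xffff (i.e. -1) gives 0x00000004.  */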
+  *((unsigned long*)& __m256i_op0[3]) = 0x6100000800060005;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5ee1c073b800c916;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5ff00007fff9fff3;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0209fefb08140000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0003fffc00060000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000800000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000bf6e0000c916;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000030000fff3;
+  __m256i_out = __lasx_xvaddwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000b004a00440040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8020004a0011002a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000b004a00440040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8020004a0011002a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000004a00000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000004a0000002a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000004a00000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000004a0000002a;
+  __m256i_out = __lasx_xvaddwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00001fff00001fff;
+  __m256i_out = __lasx_xvaddwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvaddwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x001a001a001a009a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x001a001a002a009a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x001a001a001a009a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x001a001a002a009a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001a000000da;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001a000000da;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001a000000da;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001a000000da;
+  __m256i_out = __lasx_xvaddwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvaddwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000007ffffffce;
+  __m256i_out = __lasx_xvaddwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000001fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000001ce;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000001fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000001ce;
+  __m256i_out = __lasx_xvaddwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
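+  /* __lasx_xvaddwev_d_wu_w: the same even-element widening add at word
+     granularity, an unsigned word plus a signed word giving a doubleword
+     (e.g. 0x00000000 + 0x800c000c sign-extends to 0xffffffff800c000c in one
+     of the cases below).  */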
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000ff0000;
+  __m256i_out = __lasx_xvaddwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvaddwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010000;
+  __m256i_out = __lasx_xvaddwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff000000010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8011ffae800c000c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00baff050083ff3c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x80b900b980380038;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0017ffa8008eff31;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff800c000c;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000084ff3c;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff80380038;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000008fff31;
+  __m256i_out = __lasx_xvaddwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000001001f001e;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000001001f001e;
+  __m256i_out = __lasx_xvaddwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000ff00ff;
+  __m256i_out = __lasx_xvaddwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100f000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100f000ff;
+  __m256i_out = __lasx_xvaddwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff88ff88ff880000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff88ff88ff880000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff88ff88ff880000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff88ff88ff880000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffc0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fff0ffc0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffc0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fff0ffc0;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ff78ffc0;
+  __m256i_out = __lasx_xvaddwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000016e00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000016e00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000016e00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000016e00;
+  __m256i_out = __lasx_xvaddwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffff1cff1c;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffff1cff1c;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffff1cff1c;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffff1cff1c;
+  __m256i_out = __lasx_xvaddwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
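+  /* __lasx_xvaddwod_h_bu_b: odd-indexed bytes this time, unsigned plus
+     signed, widened to halfwords; e.g. 0x00 + 0xff (i.e. -1) yields 0xffff
+     in the first case below.  */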
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvaddwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000033e87ef1;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000002e2100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000033007e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000021;
+  __m256i_out = __lasx_xvaddwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0020002000400040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0020002000400040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0020002000400040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0020002000400040;
+  __m256i_out = __lasx_xvaddwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000f000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000f000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvaddwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x007fc0083fc7c007;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x007fc0083fc7c007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffc0003fffc0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffc0003fffc0;
+  __m256i_out = __lasx_xvaddwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffdbbbcf;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffb8579f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffdbbbcf;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffb8579f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff00bb;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000ff0057;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000ff00bb;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000ff0057;
+  __m256i_out = __lasx_xvaddwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00ff00ff;
+  __m256i_out = __lasx_xvaddwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000005060503;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000073737;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000050007;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000039;
+  __m256i_out = __lasx_xvaddwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
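+  /* __lasx_xvaddwod_w_hu_h: odd-indexed halfwords, unsigned plus signed,
+     widened to words (e.g. 0x0000 + 0x0707 == 0x00000707 in the first case
+     below).  */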
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ff80;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000468600007f79;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000f3280000dfff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000007070707;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0102040000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000020100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0703020000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000707;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000010200000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000070300000000;
+  __m256i_out = __lasx_xvaddwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007fffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007fffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff800000000000;
+  __m256i_out = __lasx_xvaddwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000100640000ff92;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000100640000ff92;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007c0100007c01;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007c0100007c00;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007c0100007c01;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007c0100007c00;
+  __m256i_out = __lasx_xvaddwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000001ff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffe0000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000001ff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffe0000000000;
+  __m256i_out = __lasx_xvaddwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvaddwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000048;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000007d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000048;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000007d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000800000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000800000010;
+  __m256i_out = __lasx_xvaddwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000fffe00009fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000fffe00002001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fffe00009fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000fffe00002001;
+  __m256i_out = __lasx_xvaddwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000027;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000027;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010080;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
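+  /* The following cases exercise __lasx_xvaddwod_d_wu_w, which (as I read
+     the LASX spec and as the expected values below reflect) widens and adds
+     the odd-indexed 32-bit elements, treating the first operand as unsigned
+     and the second as signed.  */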
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fffffffe;
+  __m256i_out = __lasx_xvaddwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
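+  /* Cases for __lasx_xvaddwod_q_du_d: widening add of the odd-indexed
+     64-bit elements within each 128-bit lane, first operand taken as
+     unsigned, second as signed (per the LASX spec).  */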
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000007f00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7ffe7fffeffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffd84900000849;
+  *((unsigned long*)& __m256i_op0[0]) = 0x07fffc670800f086;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x311d9b643ec1fe01;
+  *((unsigned long*)& __m256i_op1[0]) = 0x344ade20fe00fd01;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007f00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x311d73ad3ec2064a;
+  __m256i_out = __lasx_xvaddwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff80cbfffffdf8;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000081500000104;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffa4fffffffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000700000002;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff80cbfffffdf8;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffa4fffffffd;
+  __m256i_out = __lasx_xvaddwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000008050501;
+  __m256i_out = __lasx_xvaddwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01fe007a01c40110;
+  *((unsigned long*)& __m256i_op0[2]) = 0x019d00a20039fff9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01fe007a01c40110;
+  *((unsigned long*)& __m256i_op0[0]) = 0x019d00a2003a0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000003ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x01fe007a01c40110;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fe007a01c40110;
+  __m256i_out = __lasx_xvaddwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x001ffffe00200000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x001ffffe00200000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0020001d001f;
+  __m256i_out = __lasx_xvaddwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000fef0ff0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000fef0ff0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000400080ffc080;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080ff0080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000400080ffc080;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080ff0080;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000400080ffc080;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000400080ffc080;
+  __m256i_out = __lasx_xvaddwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x007f010000000100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x007f010000000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvaddwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
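+  /* The remaining cases cover the xvavg family: element-wise averages
+     computed as (a + b) >> 1, where the signed variants (_b/_h/_w/_d) use
+     an arithmetic shift and the unsigned variants (_bu/_hu/_wu/_du) a
+     logical shift, per the LASX spec.  */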
+  *((unsigned long*)& __m256i_op0[3]) = 0x1d1d1d1d1d1d1d1d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1d1d1d1d1d1d1d1d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x61d849f0c0794ced;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe75278c187b20039;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf90c0c0c00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0ca40c0c0c0c0cc0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0c0c0c0c0cb60cc0;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfbe0b80c960c96d0;
+  *((unsigned long*)& __m256i_result[3]) = 0x8b1414140e0e0e0e;
+  *((unsigned long*)& __m256i_result[2]) = 0x146014141414146e;
+  *((unsigned long*)& __m256i_result[1]) = 0x36722a7e66972cd6;
+  *((unsigned long*)& __m256i_result[0]) = 0xf19998668e5f4b84;
+  __m256i_out = __lasx_xvavg_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8b1414140e0e0e0e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00d6c1c830160048;
+  *((unsigned long*)& __m256i_op1[1]) = 0x36722a7e66972cd6;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe3aebaf4df958004;
+  *((unsigned long*)& __m256i_result[3]) = 0xc58a0a0a07070706;
+  *((unsigned long*)& __m256i_result[2]) = 0x006b60e4180b0023;
+  *((unsigned long*)& __m256i_result[1]) = 0x1b39153f334b966a;
+  *((unsigned long*)& __m256i_result[0]) = 0xf1d75d79efcac002;
+  __m256i_out = __lasx_xvavg_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000007f00000022;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000007f00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000003f00000011;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000003f00000000;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3fff3fff3fff3fff;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007fff7fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fff7fffffff;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvavg_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvavg_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0a09080706050403;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0a09080706050403;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0504840303028201;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0504840303028201;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff7fff7fff;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000ffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ff00fff0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000007f7f;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000007f7f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000007f7f;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007f007f78;
+  __m256i_out = __lasx_xvavg_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000800080008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000800080008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000800080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000800080008000;
+  __m256i_out = __lasx_xvavg_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000001f;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000001f;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000001f;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000001f;
+  __m256i_out = __lasx_xvavg_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0080808080808080;
+  *((unsigned long*)& __m256i_result[2]) = 0x0080808080808080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0080808100808080;
+  *((unsigned long*)& __m256i_result[0]) = 0x0080808000808080;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000100da000100fd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001ffe20001fefd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001009a000100fd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001ff640001fefd;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000edff00fffd;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000fff10000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000cdff00fffd;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ff320000ffff;
+  __m256i_out = __lasx_xvavg_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffefefeff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff295329;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffefefeff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff295329;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffe00f7ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffff629d7;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffe00f7ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffff629d7;
+  __m256i_out = __lasx_xvavg_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffbfffafffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffbfffaffff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00fe01fc01fe01fc;
+  *((unsigned long*)& __m256i_op1[2]) = 0x012c002c001c0006;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00fe01fc01fe0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x012c002c001c000a;
+  *((unsigned long*)& __m256i_result[3]) = 0x807e80fd80fe80fd;
+  *((unsigned long*)& __m256i_result[2]) = 0x80938013800d8002;
+  *((unsigned long*)& __m256i_result[1]) = 0x807e80fd80fe0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x80938013800d0005;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00010003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0080000200000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00010003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff00010002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0080000200000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff00010002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00010002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0080000200000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00010002;
+  __m256i_out = __lasx_xvavg_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1111111111111111;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1111111111111111;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1111111111111111;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1111111111111111;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0888888888888888;
+  *((unsigned long*)& __m256i_result[2]) = 0x0888888888888888;
+  *((unsigned long*)& __m256i_result[1]) = 0x0888888888888888;
+  *((unsigned long*)& __m256i_result[0]) = 0x0888888888888888;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000004444;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00007bbb0000f777;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000004444;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00007bbb0000f777;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000002222;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003ddd80007bbb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000002222;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003ddd80007bbb;
+  __m256i_out = __lasx_xvavg_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0008000800080008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0008000800080008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0008000800080008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0008000800080008;
+  __m256i_out = __lasx_xvavg_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020202020202020;
+  __m256i_out = __lasx_xvavg_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000200;
+  __m256i_out = __lasx_xvavg_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4010000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3e6ce7d9cb7afb62;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4010000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3e6ce7d9cb7afb62;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2008000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1f3673ece5bd7db1;
+  *((unsigned long*)& __m256i_result[1]) = 0x2008000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1f3673ece5bd7db1;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000800080008000;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000005000000020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000005000000020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002800000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002800000010;
+  __m256i_out = __lasx_xvavg_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xa020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0xa020202020206431;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0xa020202020206431;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xd010101010101010;
+  *((unsigned long*)& __m256i_result[2]) = 0xd010101010103218;
+  *((unsigned long*)& __m256i_result[1]) = 0xd010101010101010;
+  *((unsigned long*)& __m256i_result[0]) = 0xd010101010103218;
+  __m256i_out = __lasx_xvavg_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fff7fff;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff02ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffffffff0100;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00fefffeff02ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00030006fa05f20e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00030081bd80f90e;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[2]) = 0x00010003fc827a86;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007f7f7f7f0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f017fc0ddbf7d86;
+  __m256i_out = __lasx_xvavg_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x40efffe000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x40efffe000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_result[1]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f807f007f7f817f;
+  __m256i_out = __lasx_xvavg_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x111ebb784f9c4100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1c386546809f3b50;
+  *((unsigned long*)& __m256i_op1[1]) = 0x111ebb784f9bf1ac;
+  *((unsigned long*)& __m256i_op1[0]) = 0x21f6050d955d3f68;
+  *((unsigned long*)& __m256i_result[3]) = 0x088f5dbc27ce2080;
+  *((unsigned long*)& __m256i_result[2]) = 0x161c32a2c04f9da7;
+  *((unsigned long*)& __m256i_result[1]) = 0x088f5dbc27cdf8d6;
+  *((unsigned long*)& __m256i_result[0]) = 0x10fb02864aae9fb4;
+  __m256i_out = __lasx_xvavg_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[3]) = 0x1010101010101013;
+  *((unsigned long*)& __m256i_result[2]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_result[1]) = 0x1010101010101013;
+  *((unsigned long*)& __m256i_result[0]) = 0x1010101010101010;
+  __m256i_out = __lasx_xvavg_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ff810011;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ff810011;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x3fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x3fff7fffffc08008;
+  *((unsigned long*)& __m256i_result[1]) = 0x3fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x3fff7fffffc08008;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fff00007fff;
+  __m256i_out = __lasx_xvavg_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffefffefffeffee;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe0000fffe0012;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffefffefffeffee;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe0000fffe0012;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffefffefffeffee;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffe0000fffe0012;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffefffefffeffee;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffe0000fffe0012;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffefffefffeffee;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffe0000fffe0012;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffefffefffeffee;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffe0000fffe0012;
+  __m256i_out = __lasx_xvavg_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x800000007fff0001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x80000000ff7f0001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x800000007fff0001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x80000000ff7f0001;
+  *((unsigned long*)& __m256i_result[3]) = 0x800000007fff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x80000000ff7f0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x800000007fff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x80000000ff7f0000;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000000;
+  __m256i_out = __lasx_xvavg_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffc6ffc6003a003a;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffc6ffc6003a003a;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff7fff0000;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffe00000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffe00000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffff00000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffff00000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000004;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fc38fc38;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fc38fc38;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007e1c7e1c;
+  *((unsigned long*)& __m256i_result[2]) = 0x7e00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007e1c7e1c;
+  *((unsigned long*)& __m256i_result[0]) = 0x7e00000000000000;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000fffe00800022;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fffe00800022;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000003ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007fff00400011;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000008001ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007fff00400011;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fff7fff;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvavg_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000800200027;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000800200028;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000800200027;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000400100013;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000400100014;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000400100013;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000400000004;
+  __m256i_out = __lasx_xvavg_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000006170;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000006170;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000030b8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000030b8;
+  __m256i_out = __lasx_xvavg_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[0]) = 0x0202010202020102;
+  __m256i_out = __lasx_xvavg_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101000101010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101000101010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101000101010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101000101010001;
+  __m256i_out = __lasx_xvavg_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvavg_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000400000003fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000400000003fff;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000040404000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000040404000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000020202000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000020202000;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvavg_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffc01fc01;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffc01fc01;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000003fc03bbc;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffe00fe00;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000001fe01dde;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffe00fe00;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000001fe01dde;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001010100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000405;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001010100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000405;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000800080;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000800080;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000202;
+  __m256i_out = __lasx_xvavg_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000f0f0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000f0f0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007878;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000007878;
+  __m256i_out = __lasx_xvavg_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvavg_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x007f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f7f7f7f00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x007f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f7f7f7f00000000;
+  __m256i_out = __lasx_xvavg_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001000000010;
+  __m256i_out = __lasx_xvavg_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000100080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000100080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000080040;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000001e00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000f00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavg_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffefffffffeff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffefffffffeff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffefffffffeff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffefffffffeff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvavg_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x007fffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x007fffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x003fffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x003fffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvavg_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0040000000000003;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0040000000000003;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[3]) = 0x0020000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0020000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvavg_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_result[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0004000500040005;
+  __m256i_out = __lasx_xvavg_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
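+  /* The cases below move on to the rounding variants.  xvavg_{b,h,w,d}[u]
+     is understood here as the truncating element-wise average (a + b) >> 1,
+     and xvavgr_{b,h,w,d}[u] as the rounding form (a + b + 1) >> 1, with
+     signed shifts for b/h/w/d and unsigned shifts for bu/hu/wu/du.  */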
+  *((unsigned long*)& __m256i_op0[3]) = 0x372e9d75e8aab100;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc5c085372cfabfba;
+  *((unsigned long*)& __m256i_op0[1]) = 0x31730b5beb7c99f5;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0658f2dc0eb21e3c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000501e99b;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000109973de7;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001020f22;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000001890b7a39;
+  *((unsigned long*)& __m256i_result[3]) = 0x1b974ebaf6d64d4e;
+  *((unsigned long*)& __m256i_result[2]) = 0x62e0429c1b48fed1;
+  *((unsigned long*)& __m256i_result[1]) = 0x18b985adf63f548c;
+  *((unsigned long*)& __m256i_result[0]) = 0x032c796ecbdecc3b;
+  __m256i_out = __lasx_xvavgr_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffe000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100020001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fffffffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8b1414140e0e0e0e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00d6c1c830160048;
+  *((unsigned long*)& __m256i_op1[1]) = 0x36722a7e66972cd6;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe3aebaf4df958004;
+  *((unsigned long*)& __m256i_result[3]) = 0xc5890a0a07070707;
+  *((unsigned long*)& __m256i_result[2]) = 0x006be0e4180b8024;
+  *((unsigned long*)& __m256i_result[1]) = 0x1b399540334c966c;
+  *((unsigned long*)& __m256i_result[0]) = 0x71d7dd7aefcac001;
+  __m256i_out = __lasx_xvavgr_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xdff8000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xdff8000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xdff8000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xdff8000000000000;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000808081;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000808081;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000808081;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000808081;
+  __m256i_out = __lasx_xvavgr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000f18080010000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000f18080010000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000078c0c0008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000078c0c0008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[2]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[0]) = 0x8080808080808080;
+  __m256i_out = __lasx_xvavgr_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvavgr_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000081;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000808080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080404040;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvavgr_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ff7f0000ff7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ff7f0000ff7f;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x5555555536aaaaac;
+  *((unsigned long*)& __m256i_op0[2]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_op0[1]) = 0x5555555536aaaaac;
+  *((unsigned long*)& __m256i_op0[0]) = 0x55555555aaaaaaac;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff39ffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff39ffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x2b2b2b2b1bd5d5d6;
+  *((unsigned long*)& __m256i_result[2]) = 0x2a2a2a2af2d5d5d6;
+  *((unsigned long*)& __m256i_result[1]) = 0x2b2b2b2b1bd5d5d6;
+  *((unsigned long*)& __m256i_result[0]) = 0x2a2a2a2af2d5d5d6;
+  __m256i_out = __lasx_xvavgr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x4000000000000000;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfebdff3eff3dff52;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfebdff3eff3dff52;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfebdff3eff3dff52;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfebdff3eff3dff52;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1086658a18ba3594;
+  *((unsigned long*)& __m256i_op1[2]) = 0x160fe9f000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1086658a18ba3594;
+  *((unsigned long*)& __m256i_op1[0]) = 0x160fe9f000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x07a232640bfc1a73;
+  *((unsigned long*)& __m256i_result[2]) = 0x0a66f497ff9effa9;
+  *((unsigned long*)& __m256i_result[1]) = 0x07a232640bfc1a73;
+  *((unsigned long*)& __m256i_result[0]) = 0x0a66f497ff9effa9;
+  __m256i_out = __lasx_xvavgr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3a2a3a2a3a2a3a2a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3a2a3a2a3aaa45aa;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3a553f7f7a2a3a2a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3a2a3a2a3aaa45aa;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x017e00ff017e00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x1d949d949d949d95;
+  *((unsigned long*)& __m256i_result[2]) = 0x1d949d949e1423d4;
+  *((unsigned long*)& __m256i_result[1]) = 0x1de9a03f3dd41d95;
+  *((unsigned long*)& __m256i_result[0]) = 0x1d949d949e1423d4;
+  __m256i_out = __lasx_xvavgr_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000000c0;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000c0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000c0;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000c0;
+  __m256i_out = __lasx_xvavgr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000083f95466;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010100005400;
+  *((unsigned long*)& __m256i_op1[3]) = 0x001e001ea1bfa1bf;
+  *((unsigned long*)& __m256i_op1[2]) = 0x001e001e83e5422e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x001e001ea1bfa1bf;
+  *((unsigned long*)& __m256i_op1[0]) = 0x011f011f0244420e;
+  *((unsigned long*)& __m256i_result[3]) = 0x000f000fd0dfd0df;
+  *((unsigned long*)& __m256i_result[2]) = 0x000f000f83ef4b4a;
+  *((unsigned long*)& __m256i_result[1]) = 0x000f000fd0dfd0df;
+  *((unsigned long*)& __m256i_result[0]) = 0x0110011001224b07;
+  __m256i_out = __lasx_xvavgr_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffefefeff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff295329;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffefefeff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff295329;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_result[3]) = 0x007f00f8ff7fff80;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fff6a9d8;
+  *((unsigned long*)& __m256i_result[1]) = 0x007f00f8ff7fff80;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fff6a9d8;
+  __m256i_out = __lasx_xvavgr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000082a54290;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000028aa700;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000082a54290;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000002a54287;
+  *((unsigned long*)& __m256i_result[3]) = 0x007f00f841532148;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001a753c3;
+  *((unsigned long*)& __m256i_result[1]) = 0x007f00f841532148;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001b52187;
+  __m256i_out = __lasx_xvavgr_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000fd0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000fd0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000007f0000;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ffe7ffe7ffe7ffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007ffe7ffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ffe7ffe7ffe8000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000807e7ffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ffe7ffe7ffe7ffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007ffe7ffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ffe7ffe7ffe8000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000807e7ffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x7ffe7ffe7ffe7ffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007ffe7ffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x7ffe7ffe7ffe8000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000807e7ffe;
+  __m256i_out = __lasx_xvavgr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[2]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[0]) = 0x8080808080808080;
+  __m256i_out = __lasx_xvavgr_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000004a00000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000004a0000002a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000004a00000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000004a0000002a;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000fffffffefffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff7fffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000fffffffefffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002500000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x00008024ffff8014;
+  *((unsigned long*)& __m256i_result[1]) = 0xffc0002500000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x00008024ffff8014;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000004444;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00007bbb0000f777;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000004444;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00007bbb0000f777;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000002222;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003dde00007bbc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000002222;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003dde00007bbc;
+  __m256i_out = __lasx_xvavgr_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0020002000400040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0020002000400040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0020002000400040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0020002000400040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010001000200020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010001000200020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010001000200020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010001000200020;
+  __m256i_out = __lasx_xvavgr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000030000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000030000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000018002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000018002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvavgr_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000001a00;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvavgr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_result[3]) = 0x4ffc3f783fc040c0;
+  *((unsigned long*)& __m256i_result[2]) = 0x3fc03f803fc040c0;
+  *((unsigned long*)& __m256i_result[1]) = 0x4ffc3f783fc040c0;
+  *((unsigned long*)& __m256i_result[0]) = 0x3fc03f803fc040c0;
+  __m256i_out = __lasx_xvavgr_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x223d76f09f3881ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3870ca8d013e76a0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x223d76f09f37e357;
+  *((unsigned long*)& __m256i_op1[0]) = 0x43ec0a1b2aba7ed0;
+  *((unsigned long*)& __m256i_result[3]) = 0x111ebb784f9c4100;
+  *((unsigned long*)& __m256i_result[2]) = 0x1c386546809f3b50;
+  *((unsigned long*)& __m256i_result[1]) = 0x111ebb784f9bf1ac;
+  *((unsigned long*)& __m256i_result[0]) = 0x21f6050d955d3f68;
+  __m256i_out = __lasx_xvavgr_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0505070804040404;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0504070804040404;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0505070804040404;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0504070804040404;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0283038402020202;
+  *((unsigned long*)& __m256i_result[2]) = 0x0282038402020202;
+  *((unsigned long*)& __m256i_result[1]) = 0x0283038402020202;
+  *((unsigned long*)& __m256i_result[0]) = 0x0282038402020202;
+  __m256i_out = __lasx_xvavgr_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000023a20000a121;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000179e0000951d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000023a20000a121;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000179e0000951d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000125100005111;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000c4f00004b0f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000125100005111;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000c4f00004b0f;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000080008001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000080008001;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00fe00fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00fe00fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00fe00fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00fe00fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_result[1]) = 0x007f8080007f007f;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f8080007f007f;
+  __m256i_out = __lasx_xvavgr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1010101010001000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x101010100000000e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0889088908810881;
+  *((unsigned long*)& __m256i_result[2]) = 0x0081010000810100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0889088900810088;
+  *((unsigned long*)& __m256i_result[0]) = 0x0081010000810100;
+  __m256i_out = __lasx_xvavgr_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffc6ffc6003a003a;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffc6ffc6003a003a;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffe37fe3001d001d;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff8000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffe37fe3001d001d;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffff8000;
+  __m256i_out = __lasx_xvavgr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7c00000880008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7c00000880008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000001d001d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000080008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3e00000440004000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3e000004400f400f;
+  __m256i_out = __lasx_xvavgr_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0100000001000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0100000001000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x3abb3abbbabababa;
+  *((unsigned long*)& __m256i_result[2]) = 0x0080000000800080;
+  *((unsigned long*)& __m256i_result[1]) = 0x3abb3abbbabababa;
+  *((unsigned long*)& __m256i_result[0]) = 0x0080000000800080;
+  __m256i_out = __lasx_xvavgr_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbf00bf00bf00bf00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbf84bf00bf00bf0e;
+  *((unsigned long*)& __m256i_op0[1]) = 0xbf00bf00bf00bf00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbf84bf00bf00bf0e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xdf80df80df80df80;
+  *((unsigned long*)& __m256i_result[2]) = 0xdfc2df80df80df87;
+  *((unsigned long*)& __m256i_result[1]) = 0xdf80df80df80df80;
+  *((unsigned long*)& __m256i_result[0]) = 0xdfc2df80df80df87;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbf00bf00bf00bf00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbf84bf00bf00bf0e;
+  *((unsigned long*)& __m256i_op0[1]) = 0xbf00bf00bf00bf00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbf84bf00bf00bf0e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00003f3f00003f3f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00003f3f00003f3f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_result[3]) = 0xdf80ff20df80ff20;
+  *((unsigned long*)& __m256i_result[2]) = 0xdfc2ff20df80ffa7;
+  *((unsigned long*)& __m256i_result[1]) = 0xdf80ff20df80ff20;
+  *((unsigned long*)& __m256i_result[0]) = 0xdfc2ff20df80ffa7;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000840100000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbffebffec0fe0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000840100000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbffebffec0fe0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000420080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000420080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x5fff5fff607f0000;
+  __m256i_out = __lasx_xvavgr_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000c0000005;
+  *((unsigned long*)& __m256i_result[2]) = 0x21f8c3c4c0000005;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000c0000005;
+  *((unsigned long*)& __m256i_result[0]) = 0x21f8c3c4c0000005;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op0[2]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op0[1]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op0[0]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xc848c848c848c848;
+  *((unsigned long*)& __m256i_result[2]) = 0x8848c848c848c848;
+  *((unsigned long*)& __m256i_result[1]) = 0xc848c848c848c848;
+  *((unsigned long*)& __m256i_result[0]) = 0x8848c848c848c848;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvavgr_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000457d607d;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff457d607f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000457d607d;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff457d607f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffa2beb040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffa2beb040;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000457d607d;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff457d607f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000457d607d;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff457d607f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffa2beb040;
+  __m256i_out = __lasx_xvavgr_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xc0008000c0008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xc0008000c0008000;
+  __m256i_out = __lasx_xvavgr_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000e000e000e000e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000e000e000e000e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0003800400038004;
+  *((unsigned long*)& __m256i_result[2]) = 0x000a800b000a800b;
+  *((unsigned long*)& __m256i_result[1]) = 0x0003800400038004;
+  *((unsigned long*)& __m256i_result[0]) = 0x000a800b000a800b;
+  __m256i_out = __lasx_xvavgr_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000018803100188;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000018803100188;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000014402080144;
+  __m256i_out = __lasx_xvavgr_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000086fe0000403e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000403e00004040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000086fe0000403e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000403e00004040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000437f0000201f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000201f00002020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000437f0000201f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000201f00002020;
+  __m256i_out = __lasx_xvavgr_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fe36364661af18f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fe36364661af18f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_result[3]) = 0x40f23232330df9c8;
+  *((unsigned long*)& __m256i_result[2]) = 0x40f2323240f23232;
+  *((unsigned long*)& __m256i_result[1]) = 0x40f23232330df9c8;
+  *((unsigned long*)& __m256i_result[0]) = 0x40f2323240f23232;
+  __m256i_out = __lasx_xvavgr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00f9f9f900000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00f9f9f900000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007cfcfd80000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x007cfcfd80000001;
+  __m256i_out = __lasx_xvavgr_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x4000c08000000080;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000080c000c080;
+  *((unsigned long*)& __m256i_result[1]) = 0x4000c08000000080;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000080c000c080;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100c00000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x90007fff90008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0ffffffe90008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x87ffffff87ffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xc880bfffc880c080;
+  *((unsigned long*)& __m256i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[0]) = 0x87ffffffc880c080;
+  __m256i_out = __lasx_xvavgr_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0ff000000000f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000f00f000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0ff000000000f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000f00f000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00f8000000000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x000800f800000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00f8000000000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x000800f800000000;
+  __m256i_out = __lasx_xvavgr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000090b0906;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000005060503;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000073737;
+  __m256i_out = __lasx_xvavgr_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff8001ffff8001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff8001ffff8001;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fff800000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffc0017fffc001;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fff800000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffc0017fffc001;
+  __m256i_out = __lasx_xvavgr_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3fffffff3fffc000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3fffffff3fffc000;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000f00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000700000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x007fffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x007fffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x003fffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x003fffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x003fffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x003fffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x001fffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x001fffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0080000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000000a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000000a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0040000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[1]) = 0x0040000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000005;
+  __m256i_out = __lasx_xvavgr_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000007ffffffce;
+  __m256i_out = __lasx_xvavgr_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000005858585a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000005858585a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000005858585a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000005858585a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000023a300003fff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000023a300003fef;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000023a300003fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000023a300003fef;
+  *((unsigned long*)& __m256i_result[3]) = 0x000011d1ac2c4c2d;
+  *((unsigned long*)& __m256i_result[2]) = 0x000011d1ac2c4c25;
+  *((unsigned long*)& __m256i_result[1]) = 0x000011d1ac2c4c2d;
+  *((unsigned long*)& __m256i_result[0]) = 0x000011d1ac2c4c25;
+  __m256i_out = __lasx_xvavgr_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
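+  /* The remaining tests switch to the xvabsd_* intrinsics: per-element
+     absolute difference, assumed here to be |op0 - op1| with signed
+     (b/h/w/d) or unsigned (bu/hu/wu/du) element comparison.  */
+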
+  *((unsigned long*)& __m256i_op0[3]) = 0x34598d0fd19314cb;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1820939b2280fa86;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4a1c269b8e892a3a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x063f2bb758abc664;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffc0fcffffcf83;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000288a00003c1c;
+  *((unsigned long*)& __m256i_result[3]) = 0x3459730f2f6d1435;
+  *((unsigned long*)& __m256i_result[2]) = 0x19212d61237f2b03;
+  *((unsigned long*)& __m256i_result[1]) = 0x4a1c266572772a3a;
+  *((unsigned long*)& __m256i_result[0]) = 0x063f032d58557648;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0cc08723ff900001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xcc9b89f2f6cef440;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0cc08723006fffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x3364760e09310bc0;
+  __m256i_out = __lasx_xvabsd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000000;
+  __m256i_out = __lasx_xvabsd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000017f0000017d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000017f0000017f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000017f0000017d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000017f0000017f;
+  __m256i_out = __lasx_xvabsd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020000020200000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020000020200000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0008000001010000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101000001010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020000020200000;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020000020200000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0008000001010000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101000001010000;
+  __m256i_out = __lasx_xvabsd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100010000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100010000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100010000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100010000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100010080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100010000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100010080;
+  __m256i_out = __lasx_xvabsd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001400000014;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfbba01c0003f7e3f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffc6cc05c64d960e;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfbd884e7003f7e3f;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff874dc687870000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfbba01c0003f7e3f;
+  *((unsigned long*)& __m256i_result[2]) = 0xffc6cc05c64d960e;
+  *((unsigned long*)& __m256i_result[1]) = 0xfbd884e7003f7e3f;
+  *((unsigned long*)& __m256i_result[0]) = 0xff874dc687870000;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00fe01f000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00fe01f000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbf800000bf800000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xd662fa0000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xbf800000bf800000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xd6ef750000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x417e01f040800000;
+  *((unsigned long*)& __m256i_result[2]) = 0x299d060000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x417e01f040800000;
+  *((unsigned long*)& __m256i_result[0]) = 0x29108b0000000000;
+  __m256i_out = __lasx_xvabsd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff7fff05407fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff7fff05407fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x400040003abf4000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x400040003abf4000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000003fff3fff;
+  __m256i_out = __lasx_xvabsd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0408040800008003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0408040800008003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fff80800;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0408040800008003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x04080408fff87803;
+  __m256i_out = __lasx_xvabsd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007f017f01;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007f017f01;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000073333333;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000073333333;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000073333333;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000073333333;
+  __m256i_out = __lasx_xvabsd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001700170017;
+  __m256i_out = __lasx_xvabsd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000000;
+  __m256i_out = __lasx_xvabsd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffa0078fffa0074;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffa0078fffa0074;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffb79fb74;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffb79fb74;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m256i_result[3]) = 0x000100010485048a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0005ff870005ff86;
+  *((unsigned long*)& __m256i_result[1]) = 0x000100010485048a;
+  *((unsigned long*)& __m256i_result[0]) = 0x0005ff870005ff86;
+  __m256i_out = __lasx_xvabsd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000100010485048a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0005ff870005ff86;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000100010485048a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0005ff870005ff86;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffeffebfb7afb62;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffa0065fffa0066;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffeffebfb7afb62;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffa0065fffa0066;
+  __m256i_out = __lasx_xvabsd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00040000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00040000;
+  __m256i_out = __lasx_xvabsd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0606060606060606;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0606060606060606;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0606060606060606;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0606060606060606;
+  *((unsigned long*)& __m256i_result[3]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_result[2]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_result[1]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_result[0]) = 0xf9f9f9f9f9f9f9f9;
+  __m256i_out = __lasx_xvabsd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000000;
+  __m256i_out = __lasx_xvabsd_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000050fd00000101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000040c100000101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000050fd00000101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000040c100000101;
+  *((unsigned long*)& __m256i_result[3]) = 0x000050fd00000101;
+  *((unsigned long*)& __m256i_result[2]) = 0x000040c100000101;
+  *((unsigned long*)& __m256i_result[1]) = 0x000050fd00000101;
+  *((unsigned long*)& __m256i_result[0]) = 0x000040c100000101;
+  __m256i_out = __lasx_xvabsd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000b2673a90896a4;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000b2673a90896a4;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001504f4c4b2361;
+  *((unsigned long*)& __m256i_result[2]) = 0x303338a48f374969;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001504f4c4b2361;
+  *((unsigned long*)& __m256i_result[0]) = 0x303338a48f374969;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000ffff0001;
+  __m256i_out = __lasx_xvabsd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x800000007fff0001;
+  *((unsigned long*)& __m256i_result[2]) = 0x80000000ff7f0001;
+  *((unsigned long*)& __m256i_result[1]) = 0x800000007fff0001;
+  *((unsigned long*)& __m256i_result[0]) = 0x80000000ff7f0001;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x807c7fffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80817fff00810000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x807c7fffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80817fff00810000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0002000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0002000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x80767f0101050101;
+  *((unsigned long*)& __m256i_result[2]) = 0x80817f01007f0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x80767f0101050101;
+  *((unsigned long*)& __m256i_result[0]) = 0x80817f01007f0000;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000033;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000033;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003f3f0000400d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003f3f0000400d;
+  __m256i_out = __lasx_xvabsd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000009;
+  __m256i_out = __lasx_xvabsd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffff88;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffe98;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000064;
+  __m256i_out = __lasx_xvabsd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvabsd_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01ffff4300fffeff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfe0000bcff000100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01ffff4300fffeff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfe0000bcff000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x81ff00bd80ff0101;
+  *((unsigned long*)& __m256i_result[2]) = 0x01ff00bd00ff0101;
+  *((unsigned long*)& __m256i_result[1]) = 0x81ff00bd80ff0101;
+  *((unsigned long*)& __m256i_result[0]) = 0x01ff00bd00ff0101;
+  __m256i_out = __lasx_xvabsd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00ff003f003f00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff0101fd00010100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00ff003f003f00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff0101fd00010100;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff00ff003f003f00;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff0101fd00010100;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff00ff003f003f00;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff0101fd00010100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fe01fe01fe01fe;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x437fe01fe01fe020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x437fe01fe01fe020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x037fe01f001fe020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x037fe01f001fe020;
+  *((unsigned long*)& __m256i_result[3]) = 0x437f201f201f2020;
+  *((unsigned long*)& __m256i_result[2]) = 0x037f201f001f2020;
+  *((unsigned long*)& __m256i_result[1]) = 0x437f201f201f2020;
+  *((unsigned long*)& __m256i_result[0]) = 0x037f201f001f2020;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000010000000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1f60010000080100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1f60010000080100;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000a0008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000a0008;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffff5fff7;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffff5fff7;
+  __m256i_out = __lasx_xvabsd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000010000080040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010000080040;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001010000;
+  __m256i_out = __lasx_xvabsd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010001;
+  __m256i_out = __lasx_xvabsd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f80ffffff808000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f80ffffff808000;
+  *((unsigned long*)& __m256i_result[3]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f0000007f7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f0000007f7fff;
+  __m256i_out = __lasx_xvabsd_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffffd;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffffe;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000400000003ffb;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000400100004001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000400000003ffb;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000400100004001;
+  __m256i_out = __lasx_xvabsd_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00003fef00003fea;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003ff000003ff0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00003fef00003fea;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003ff000003ff0;
+  *((unsigned long*)& __m256i_result[3]) = 0x00003fea00013feb;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003fe900014022;
+  *((unsigned long*)& __m256i_result[1]) = 0x00003fea00013feb;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003fe900014022;
+  __m256i_out = __lasx_xvabsd_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvabsd_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000002780;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000002780;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000010100020103;
+  *((unsigned long*)& __m256i_result[2]) = 0x040f040f040b236d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000010100020103;
+  *((unsigned long*)& __m256i_result[0]) = 0x040f040f040b236d;
+  __m256i_out = __lasx_xvabsd_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
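+  /* The remaining cases in this group exercise __lasx_xvadda_{b,h,w,d},
+     which adds the per-element absolute values of the two source vectors
+     (|a| + |b|, element width given by the suffix); the expected vectors
+     below were precomputed for these inputs.  */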
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101008000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101008000000080;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000402000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000402000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000402000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000402000000;
+  __m256i_out = __lasx_xvadda_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffeffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100010102;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000102;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000102;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xefefefefefefefef;
+  *((unsigned long*)& __m256i_op0[2]) = 0xefefefefefefefef;
+  *((unsigned long*)& __m256i_op0[1]) = 0xefefefefefefef6e;
+  *((unsigned long*)& __m256i_op0[0]) = 0xeeeeeeeeeeeeeeee;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x1010101010101012;
+  *((unsigned long*)& __m256i_result[2]) = 0x1010101010101012;
+  *((unsigned long*)& __m256i_result[1]) = 0x1010101010101093;
+  *((unsigned long*)& __m256i_result[0]) = 0x1111111111111113;
+  __m256i_out = __lasx_xvadda_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0110000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0110000000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0110000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0110000000000080;
+  __m256i_out = __lasx_xvadda_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1000000000000000;
+  __m256i_out = __lasx_xvadda_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1fe01e0000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1fe01e0000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1fe01e0000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1fe01e0000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xce7ffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xce7ffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x327f010101010102;
+  *((unsigned long*)& __m256i_result[2]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x327f010101010102;
+  *((unsigned long*)& __m256i_result[0]) = 0x6300000000000000;
+  __m256i_out = __lasx_xvadda_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff5556aaaa;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff5556aaaa;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0006ffff0004ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0002000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0006ffff0004ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0002000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0006ffff0004ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00020000aaa95556;
+  *((unsigned long*)& __m256i_result[1]) = 0x0006ffff0004ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00020000aaa95556;
+  __m256i_out = __lasx_xvadda_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdb801b6d0962003f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xdb8a3109fe0f0024;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9a7f997fff01ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbe632a4f1c3c5653;
+  *((unsigned long*)& __m256i_result[3]) = 0x247fe49409620040;
+  *((unsigned long*)& __m256i_result[2]) = 0x2475cef801f0ffdd;
+  *((unsigned long*)& __m256i_result[1]) = 0x6580668200fe0002;
+  *((unsigned long*)& __m256i_result[0]) = 0x419cd5b11c3c5654;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x017e01fe01fe01fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0586060601fe0202;
+  *((unsigned long*)& __m256i_op0[1]) = 0x017e01fe01fe0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0586060601fe0004;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0010001000100001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0010001000100001;
+  *((unsigned long*)& __m256i_result[3]) = 0x017f01fe01ff01fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x05960616020e0203;
+  *((unsigned long*)& __m256i_result[1]) = 0x017f01fe01ff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x05960616020e0005;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000045;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000d0005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000045;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000d0005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010146;
+  *((unsigned long*)& __m256i_result[2]) = 0x01010101010e0106;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010146;
+  *((unsigned long*)& __m256i_result[0]) = 0x01010101010e0106;
+  __m256i_out = __lasx_xvadda_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010000000100000;
+  __m256i_out = __lasx_xvadda_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffb79fb74;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffb79fb74;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffa;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000010486048c;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000006;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000010486048c;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000006;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000020000;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000004411;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000004411;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff00040000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x00010001000c4411;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100044411;
+  __m256i_out = __lasx_xvadda_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000002000000018;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000002000000019;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000200000001e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000002000000019;
+  *((unsigned long*)& __m256i_op1[3]) = 0x223d76f09f3881ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3870ca8d013e76a0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x223d76f09f37e357;
+  *((unsigned long*)& __m256i_op1[0]) = 0x43ec0a1b2aba7ed0;
+  *((unsigned long*)& __m256i_result[3]) = 0x223d771060c77e19;
+  *((unsigned long*)& __m256i_result[2]) = 0x3870caad013e76b9;
+  *((unsigned long*)& __m256i_result[1]) = 0x223d771060c81cc7;
+  *((unsigned long*)& __m256i_result[0]) = 0x43ec0a3b2aba7ee9;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_op0[2]) = 0xdbcbdbcb0000dbcb;
+  *((unsigned long*)& __m256i_op0[1]) = 0xdbcbdbcbecececec;
+  *((unsigned long*)& __m256i_op0[0]) = 0xdbcbdbcb0000dbcb;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2535253514141414;
+  *((unsigned long*)& __m256i_result[2]) = 0x2535253500002535;
+  *((unsigned long*)& __m256i_result[1]) = 0x2535253514141414;
+  *((unsigned long*)& __m256i_result[0]) = 0x2535253500002535;
+  __m256i_out = __lasx_xvadda_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0020000f0000000f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010000f0000000f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0020000f0000000f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010000f0000000f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0020000f0000000f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010000f0000000f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0020000f0000000f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010000f0000000f;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000504f00002361;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff8f81000040e4;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000504f00002361;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff8f81000040e4;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000007ff000007ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000007ff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000007ff000007ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000007ff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000584e00002b60;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000787dffffbf1c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000584e00002b60;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000787dffffbf1c;
+  __m256i_out = __lasx_xvadda_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010200000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010200000000;
+  __m256i_out = __lasx_xvadda_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fef010000010100;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fef010000010100;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fef010000010100;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fef010000010100;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000001fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x40b2bf4d30313031;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fffa2bea2be;
+  *((unsigned long*)& __m256i_op0[1]) = 0x40b2bf4d30313031;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fffa2bea2be;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x40b240b330313031;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff5d425d42;
+  *((unsigned long*)& __m256i_result[1]) = 0x40b240b330313031;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff5d425d42;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000100040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000100040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000100040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000100040;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000100040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000100040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000100080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000100080;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff896099cbdbfff1;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc987ffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff896099cbdbfff1;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc987ffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00769f673424000f;
+  *((unsigned long*)& __m256i_result[2]) = 0x3678000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x00769f673424000f;
+  *((unsigned long*)& __m256i_result[0]) = 0x3678000100000001;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x6);
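+  /* __lasx_xvpickve2gr_w copies word element 6 of the (all-zero) vector
+     into a scalar; its result is not asserted here.  */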
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvadda_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvadda_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000500000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000700000032;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000500000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000700000032;
+  __m256i_out = __lasx_xvadda_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00003feec0108022;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003fe9c015802c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00003feec0108022;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003fe9c015802c;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007f124010c022;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007f174015c02c;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007f124010c022;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f174015c02c;
+  __m256i_out = __lasx_xvadda_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfa15fa15fa15fa14;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfa15fa15fa15fa14;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x05ea05ea05ea05ec;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x05ea05ea05ea05ec;
+  __m256i_out = __lasx_xvadda_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101000000010000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101000000010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000020202020202;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101000000010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000020202020202;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101000000010000;
+  __m256i_out = __lasx_xvadda_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
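+  /* The following cases switch to __lasx_xvmax_{b,h,w,d} and its unsigned
+     variants (_bu/_hu/_wu/_du), which take the per-element maximum of the
+     two source vectors.  */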
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f0000007f000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f0000007f000000;
+  __m256i_out = __lasx_xvmax_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffe0000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffff000000;
+  __m256i_out = __lasx_xvmax_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff000000000000;
+  __m256i_out = __lasx_xvmax_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000004fb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000004fb;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000004fb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffcf800fffcf800;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffcf800fffcf800;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000080000000800;
+  __m256i_out = __lasx_xvmax_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0004000400040004;
+  __m256i_out = __lasx_xvmax_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1090918800000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1090918800000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1c80780000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1c80780000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1c80780000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1c80780000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000004000;
+  __m256i_out = __lasx_xvmax_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001000000010;
+  __m256i_out = __lasx_xvmax_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x5900000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x5900000000000000;
+  __m256i_out = __lasx_xvmax_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000005e02;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000005e02;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmax_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7ff0000000000000;
+  __m256i_out = __lasx_xvmax_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000002a5429;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000002a5429;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffc7418a023680;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff8845bb954b00;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffc7418a023680;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000002a5429;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff8845bb954b00;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000002a5429;
+  __m256i_out = __lasx_xvmax_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvmax_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000004a557baac4;
+  *((unsigned long*)& __m256i_op1[2]) = 0x556caad9aabbaa88;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000004a557baac4;
+  *((unsigned long*)& __m256i_op1[0]) = 0x556caad9aabbaa88;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000004a557baac4;
+  *((unsigned long*)& __m256i_result[2]) = 0x556caad9aabbaa88;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000004a557baac4;
+  *((unsigned long*)& __m256i_result[0]) = 0x556caad9aabbaa88;
+  __m256i_out = __lasx_xvmax_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ffce20;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ffce20;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000ee1100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000004560408;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000ee1100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000004560408;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff1100;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000004560420;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000ff1100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000004560420;
+  __m256i_out = __lasx_xvmax_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvmax_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmax_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmax_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffdfffffffdfffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffdfffffffdfffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmax_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007f7f817f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007f7f817f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f807f007f7f817f;
+  __m256i_out = __lasx_xvmax_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffee0000ff4c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ff050000ff3c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000fff90000ff78;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffa80000ff31;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmax_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmax_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1010100f10100fd4;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1010100f10100fd4;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffeeffaf;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000011;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffeeffaf;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000011;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffffffeeffaf;
+  *((unsigned long*)& __m256i_result[2]) = 0x1010100f10100fd4;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffffffeeffaf;
+  *((unsigned long*)& __m256i_result[0]) = 0x1010100f10100fd4;
+  __m256i_out = __lasx_xvmax_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffd8ffc7ffdaff8a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffd8ffc7ffdaff8a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000d0d8ffffeecf;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000383fffffdf0d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000d0d8ffffeecf;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000383fffffdf0d;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffd8ffc7ffffdf0d;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffd8ffc7ffffdf0d;
+  __m256i_out = __lasx_xvmax_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_result[2]) = 0xf0f0f0f0f0f0f0f0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000f0f0f0f0;
+  *((unsigned long*)& __m256i_result[0]) = 0xf0f0f0f0f0f0f0f0;
+  __m256i_out = __lasx_xvmax_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x817f11ed81800ff0;
+  __m256i_out = __lasx_xvmax_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffebeeaaefafb;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffebeeaaeeeeb;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffebeeaaefafb;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffebeeaaeeeeb;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmax_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000003f8000004;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000003f8000004;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000003f8000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000003f8000004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00ff00ff;
+  __m256i_out = __lasx_xvmax_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000000;
+  __m256i_out = __lasx_xvmax_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000020006;
+  __m256i_out = __lasx_xvmax_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x4000000000000000;
+  __m256i_out = __lasx_xvmax_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc03ae000ffff6000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc03ae000ffff6000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xc03ae000ffff6000;
+  *((unsigned long*)& __m256i_result[2]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xc03ae000ffff6000;
+  *((unsigned long*)& __m256i_result[0]) = 0xc600000000000000;
+  __m256i_out = __lasx_xvmax_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff800080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff800080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003fe000000000;
+  __m256i_out = __lasx_xvmax_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000014402080144;
+  __m256i_out = __lasx_xvmax_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000100;
+  __m256i_out = __lasx_xvmax_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000400010004;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000400010004;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000e0001000e;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000e0001000e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000e0001000e;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000e0001000e;
+  __m256i_out = __lasx_xvmax_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff0000000f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff0000000f;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x800000ff800000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x800000ff800000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x800000ff800000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x800000ff800000ff;
+  __m256i_out = __lasx_xvmax_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0080000000000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0080000000000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0080000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0080000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffefefefe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003f800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffefefefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003f800000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffefefefe;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffefefefe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000040404040;
+  __m256i_out = __lasx_xvmax_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007f433c78;
+  __m256i_out = __lasx_xvmax_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000a0008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000a0008;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007f433c78;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007f433c78;
+  __m256i_out = __lasx_xvmax_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_op0[2]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_op0[1]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_op0[0]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ff00;
+  __m256i_out = __lasx_xvmax_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffe97c020010001;
+  __m256i_out = __lasx_xvmax_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000038ea4d4a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000038ea4d4a;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000038ea4d4a;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000038ea4d4a;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff00007fff0000;
+  __m256i_out = __lasx_xvmax_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffff97a2;
+  __m256i_out = __lasx_xvmax_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmax_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvmax_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvmax_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000070002000a;
+  __m256i_out = __lasx_xvmax_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffce;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffce;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvmax_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000001400;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000003c01ff9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000003c01ff9;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffff08a7de0;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffff07c4170;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffff08a7de0;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffff07c4170;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffff08a7de0;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffff07c4170;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffff08a7de0;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffff07c4170;
+  __m256i_out = __lasx_xvmax_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmax_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
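+  /* The remaining tests exercise the immediate forms (__lasx_xvmaxi_*),
+     which take a small immediate in place of the second vector operand
+     (a 5-bit field in the LASX encoding: signed for xvmaxi_{b,h,w,d},
+     unsigned for xvmaxi_{bu,hu,wu,du}), as the -16..30 immediates used
+     below suggest.  */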
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaxi_bu(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvmaxi_d(__m256i_op0,-16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000001e;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000001e;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_result[3]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_result[2]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_result[1]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_result[0]) = 0x1c1b1a191c1b1a19;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000401000000;
+  __m256i_out = __lasx_xvmaxi_w(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaxi_hu(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaxi_w(__m256i_op0,-16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_result[2]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_result[1]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_result[0]) = 0x0005000500050005;
+  __m256i_out = __lasx_xvmaxi_h(__m256i_op0,5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0a0a0a0a0a0a0a0a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0a0a0a0a0a0a0a0a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0a0a0a0a0a0a0a0a;
+  *((unsigned long*)& __m256i_result[0]) = 0x0a0a0a0a0a0a0a0a;
+  __m256i_out = __lasx_xvmaxi_bu(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1717171717171717;
+  *((unsigned long*)& __m256i_result[2]) = 0x1717171717171717;
+  *((unsigned long*)& __m256i_result[1]) = 0x1717171717171717;
+  *((unsigned long*)& __m256i_result[0]) = 0x1717171717171717;
+  __m256i_out = __lasx_xvmaxi_bu(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvmaxi_h(__m256i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0110000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0110000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0110000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0110000000000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0110000000000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0110000000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0110000000000004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0110000000000080;
+  __m256i_out = __lasx_xvmaxi_w(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001400000014;
+  __m256i_out = __lasx_xvmaxi_wu(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000003f;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffc00000ffc0ffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffc00000ffc0ffc0;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff90000fff9fff9;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff90000fff9fff9;
+  __m256i_out = __lasx_xvmaxi_h(__m256i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ffe00007f000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x1616161616161616;
+  *((unsigned long*)& __m256i_result[2]) = 0x161616167fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7ffe16167f161616;
+  *((unsigned long*)& __m256i_result[0]) = 0x161616167fffffff;
+  __m256i_out = __lasx_xvmaxi_bu(__m256i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaxi_h(__m256i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffd10000006459;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000441000000004;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000040400000104;
+  *((unsigned long*)& __m256i_result[3]) = 0x0f0f0f0f0f0f6459;
+  *((unsigned long*)& __m256i_result[2]) = 0x0f0f44100f0f0f0f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0f0f0f0f0f0f0f0f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0f0f0f0f0f0f0f0f;
+  __m256i_out = __lasx_xvmaxi_b(__m256i_op0,15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000e00000080;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000e00000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000e00000080;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000e00000080;
+  __m256i_out = __lasx_xvmaxi_wu(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000fd0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000fd0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001b0000001b;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001b00fd0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001b0000001b;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001b00fd0000;
+  __m256i_out = __lasx_xvmaxi_wu(__m256i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000feb60000b7d0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000feb60000c7eb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000feb60000b7d0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000feb60000c7eb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0707feb60707c7eb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0707feb60707c7eb;
+  __m256i_out = __lasx_xvmaxi_bu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8080808180808093;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80808081808080fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8080808180808093;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80808081808080fb;
+  *((unsigned long*)& __m256i_result[3]) = 0xf5f5f5f5f5f5f5f5;
+  *((unsigned long*)& __m256i_result[2]) = 0xf5f5f5f5f5f5f5fe;
+  *((unsigned long*)& __m256i_result[1]) = 0xf5f5f5f5f5f5f5f5;
+  *((unsigned long*)& __m256i_result[0]) = 0xf5f5f5f5f5f5f5fb;
+  __m256i_out = __lasx_xvmaxi_b(__m256i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000009;
+  __m256i_out = __lasx_xvmaxi_d(__m256i_op0,9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0909090909090909;
+  *((unsigned long*)& __m256i_result[2]) = 0x0909090909090909;
+  *((unsigned long*)& __m256i_result[1]) = 0x0909090909090909;
+  *((unsigned long*)& __m256i_result[0]) = 0x0909090909090909;
+  __m256i_out = __lasx_xvmaxi_b(__m256i_op0,9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1111111111111111;
+  *((unsigned long*)& __m256i_result[2]) = 0x1111111111111111;
+  *((unsigned long*)& __m256i_result[1]) = 0x1111111111111111;
+  *((unsigned long*)& __m256i_result[0]) = 0x1111111111111111;
+  __m256i_out = __lasx_xvmaxi_bu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaxi_d(__m256i_op0,-2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaxi_w(__m256i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000001c;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000001c;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000001c;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000001c;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000002;
+  __m256i_out = __lasx_xvmaxi_w(__m256i_op0,2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000005;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaxi_b(__m256i_op0,-4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007aff7c00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffd017d00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007aff7c00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffd017d00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000c7aff7c00;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffd017d00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000c7aff7c00;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffd017d00;
+  __m256i_out = __lasx_xvmaxi_wu(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000e0000000e;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000e0000000e;
+  __m256i_out = __lasx_xvmaxi_w(__m256i_op0,14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000600000006;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xeffc000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf064c6098d214127;
+  *((unsigned long*)& __m256i_op0[1]) = 0xeffc000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf064c6098d214127;
+  *((unsigned long*)& __m256i_result[3]) = 0xeffc001800180018;
+  *((unsigned long*)& __m256i_result[2]) = 0xf064c6098d214127;
+  *((unsigned long*)& __m256i_result[1]) = 0xeffc001800180018;
+  *((unsigned long*)& __m256i_result[0]) = 0xf064c6098d214127;
+  __m256i_out = __lasx_xvmaxi_hu(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[0]) = 0x0d0d0d0d0d0d0d0d;
+  __m256i_out = __lasx_xvmaxi_b(__m256i_op0,13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000ff1100;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000004560420;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000ff1100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000004560420;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff1100;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000004560420;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000ff1100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000004560420;
+  __m256i_out = __lasx_xvmaxi_d(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001f0000ffff;
+  __m256i_out = __lasx_xvmaxi_wu(__m256i_op0,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00040000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00040000;
+  __m256i_out = __lasx_xvmaxi_h(__m256i_op0,-2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000001ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffe0000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000001ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffe0000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00080008000801ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0008000800080008;
+  *((unsigned long*)& __m256i_result[1]) = 0x00080008000801ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0008000800080008;
+  __m256i_out = __lasx_xvmaxi_h(__m256i_op0,8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaxi_h(__m256i_op0,0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffffe;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000012;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_result[0]) = 0x0a0a0a0a7f0a0a0a;
+  __m256i_out = __lasx_xvmaxi_b(__m256i_op0,10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000300000003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000300000003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000300000003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000300000003;
+  __m256i_out = __lasx_xvmaxi_wu(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff040000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff040000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffff400000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffff400000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaxi_w(__m256i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaxi_d(__m256i_op0,-1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvmaxi_bu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1010101010001000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1010101000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x1010101010001000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x101010100000000e;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff000000ff;
+  __m256i_out = __lasx_xvmaxi_wu(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000007ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000007ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000007ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000007ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001e0007ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001e0007ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001e0007ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001e0007ffff;
+  __m256i_out = __lasx_xvmaxi_wu(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvmaxi_hu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000900000009;
+  __m256i_out = __lasx_xvmaxi_w(__m256i_op0,9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007e1c7e1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7e00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007e1c7e1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7e00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007e1c7e1c;
+  *((unsigned long*)& __m256i_result[2]) = 0x7e00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007e1c7e1c;
+  *((unsigned long*)& __m256i_result[0]) = 0x7e00000000000000;
+  __m256i_out = __lasx_xvmaxi_d(__m256i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000fd;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000004000000fd;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000004000000fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000400000004;
+  __m256i_out = __lasx_xvmaxi_wu(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000c9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000c9;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000c9;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000c9;
+  __m256i_out = __lasx_xvmaxi_h(__m256i_op0,-15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000007;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffa3;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000165a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffa3;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000165a;
+  *((unsigned long*)& __m256i_result[3]) = 0x1818ffff1818ffa3;
+  *((unsigned long*)& __m256i_result[2]) = 0x181818181818185a;
+  *((unsigned long*)& __m256i_result[1]) = 0x1818ffff1818ffa3;
+  *((unsigned long*)& __m256i_result[0]) = 0x181818181818185a;
+  __m256i_out = __lasx_xvmaxi_bu(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff8000ffa3;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000008000165a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff8000ffa3;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000008000165a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0009000900090009;
+  *((unsigned long*)& __m256i_result[2]) = 0x000900090009165a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0009000900090009;
+  *((unsigned long*)& __m256i_result[0]) = 0x000900090009165a;
+  __m256i_out = __lasx_xvmaxi_h(__m256i_op0,9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_result[2]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_result[1]) = 0x0007000700070007;
+  *((unsigned long*)& __m256i_result[0]) = 0x0007000700070007;
+  __m256i_out = __lasx_xvmaxi_hu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0707070707070707;
+  *((unsigned long*)& __m256i_result[2]) = 0x0707070707070707;
+  *((unsigned long*)& __m256i_result[1]) = 0x0707070707070707;
+  *((unsigned long*)& __m256i_result[0]) = 0x0707070707070707;
+  __m256i_out = __lasx_xvmaxi_b(__m256i_op0,7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000000b;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000001f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000001f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000001f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000001f;
+  __m256i_out = __lasx_xvmaxi_wu(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000081f20607a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000800000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000081f20607a;
+  __m256i_out = __lasx_xvmaxi_w(__m256i_op0,8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaxi_h(__m256i_op0,0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0018001800180018;
+  *((unsigned long*)& __m256i_result[2]) = 0x0018001800180018;
+  *((unsigned long*)& __m256i_result[1]) = 0x0018001800180018;
+  *((unsigned long*)& __m256i_result[0]) = 0x0018001800180018;
+  __m256i_out = __lasx_xvmaxi_hu(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[0]) = 0x0d0d0d0d0d0d0d0d;
+  __m256i_out = __lasx_xvmaxi_b(__m256i_op0,13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffff5;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffff5;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffff5;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffff5;
+  __m256i_out = __lasx_xvmaxi_d(__m256i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000007;
+  __m256i_out = __lasx_xvmaxi_d(__m256i_op0,7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000013;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000013;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001700000017;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001700000017;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001700000017;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001700000017;
+  __m256i_out = __lasx_xvmaxi_wu(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2a2a2a2a2a2a2a2a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2a2a2a2a2a2a2a2a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2a2a2a2a2a2a2a2a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2a2a2a2a2a2a2a2a;
+  *((unsigned long*)& __m256i_result[3]) = 0x2a2a2a2a2a2a2a2a;
+  *((unsigned long*)& __m256i_result[2]) = 0x2a2a2a2a2a2a2a2a;
+  *((unsigned long*)& __m256i_result[1]) = 0x2a2a2a2a2a2a2a2a;
+  *((unsigned long*)& __m256i_result[0]) = 0x2a2a2a2a2a2a2a2a;
+  __m256i_out = __lasx_xvmaxi_b(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvmaxi_w(__m256i_op0,-2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x07fee332883f86b0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x07fed3c8f7ad28d0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x07fee332883f86b0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x07fed3c8f7ad28d0;
+  *((unsigned long*)& __m256i_result[3]) = 0x07fee332883f86b0;
+  *((unsigned long*)& __m256i_result[2]) = 0x07fed3c8f7ad28d0;
+  *((unsigned long*)& __m256i_result[1]) = 0x07fee332883f86b0;
+  *((unsigned long*)& __m256i_result[0]) = 0x07fed3c8f7ad28d0;
+  __m256i_out = __lasx_xvmaxi_wu(__m256i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0017001700176d6d;
+  *((unsigned long*)& __m256i_result[2]) = 0x0017001700176d6d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0017001700176d6d;
+  *((unsigned long*)& __m256i_result[0]) = 0x0017001700176d6d;
+  __m256i_out = __lasx_xvmaxi_hu(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000014;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000014;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000014;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x001fffffffe00011;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x001fffffffe00011;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvmaxi_hu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_result[2]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_result[1]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_result[0]) = 0x1c1c1c1c1c1c1c1c;
+  __m256i_out = __lasx_xvmaxi_bu(__m256i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaxi_b(__m256i_op0,0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000014;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000014;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000014;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000014;
+  __m256i_out = __lasx_xvmaxi_du(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000007b007e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000007b007e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000007b007e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000007b007e;
+  __m256i_out = __lasx_xvmaxi_d(__m256i_op0,2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_result[3]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m256i_result[2]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m256i_result[1]) = 0x000a000a000a000a;
+  *((unsigned long*)& __m256i_result[0]) = 0x000a000a000a000a;
+  __m256i_out = __lasx_xvmaxi_h(__m256i_op0,10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0c0c0c0c0c0c0c0c;
+  *((unsigned long*)& __m256i_result[2]) = 0x0c0c0c0c0c0c0c0c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0c0c0c0c0c0c0c0c;
+  *((unsigned long*)& __m256i_result[0]) = 0x0c0c0c0c0c0c0c0c;
+  __m256i_out = __lasx_xvmaxi_b(__m256i_op0,12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
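+  /* Reference semantics for the xvmaxi cases above (a scalar sketch of the
+     intended behaviour, assuming LASX element widths b/h/w/d = 8/16/32/64
+     bits and the usual 5-bit immediate ranges):
+       __lasx_xvmaxi_<t>  (v, si5): each signed element becomes
+                                    max((signed)e, si5),   -16 <= si5 <= 15;
+       __lasx_xvmaxi_<t>u (v, ui5): each unsigned element becomes
+                                    max((unsigned)e, ui5),   0 <= ui5 <= 31.
+     For example, __lasx_xvmaxi_h (v, -7) above turns the halfword 0xffc0
+     (-64) into 0xfff9 (-7) and leaves halfwords already >= -7 unchanged.  */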
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8b1414140e0e0e0e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x36722a7e66972cd6;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc58a0a0a07070706;
+  *((unsigned long*)& __m256i_op1[2]) = 0x006b60e4180b0023;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1b39153f334b966a;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf1d75d79efcac002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x006b60e40e0e0e0e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x36722a7e66972cd6;
+  __m256i_out = __lasx_xvmin_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8001000080000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000800080000728;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8001800080008000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x800080008000b8f1;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000ffff8000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff80008000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x800080008000b8f1;
+  __m256i_out = __lasx_xvmin_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000010100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000001000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffbf7f7fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffe651bfff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000010100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000001000100;
+  __m256i_out = __lasx_xvmin_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ff80;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000468600007f79;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000f3280000dfff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1d1d1d1d1d1d1d1d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1d1d1d1ddd9d9d1d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1d1d1d1d1d1d1d1d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1d1d1d1d046fdd1d;
+  *((unsigned long*)& __m256i_result[3]) = 0x00001d1d00001d1d;
+  *((unsigned long*)& __m256i_result[2]) = 0x00001d1d00007f79;
+  *((unsigned long*)& __m256i_result[1]) = 0x00001d1d00001d1d;
+  *((unsigned long*)& __m256i_result[0]) = 0x00001d1d0000dd1d;
+  __m256i_out = __lasx_xvmin_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x003ff18080010201;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0100000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x003ff18080010201;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0100000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000f18080010000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000f18080010000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvmin_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f017f807f017d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f017f807f017f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000017f0000017d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000017f0000017f;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf000f00000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf000f00000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xf000f00000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xf000f00000000001;
+  __m256i_out = __lasx_xvmin_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvmin_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
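+  /* xvmin_w takes the per-lane signed 32-bit minimum: in the case below,
+     0xffffd100 wins over 0x00000000 because it is negative when read as
+     a signed word.  */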
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000004040104;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffd1108199;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000714910f9;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffd10000006459;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000441000000004;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000040400000104;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffd10000000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffd1108199;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000104;
+  __m256i_out = __lasx_xvmin_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x60f02081c1c4ce2c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8008000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x60f02081c1c4ce2c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8008000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010183f9999b;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01010101d58f43c9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010183f9999b;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x01010101d58f43c9;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvmin_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000101ff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010001;
+  __m256i_out = __lasx_xvmin_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000d24;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff00010003;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0080000200000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff00010003;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0002000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0002000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff00ff00ee;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff00ff00ee;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f00ff007f00ff;
+  __m256i_out = __lasx_xvmin_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00040000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff00040000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00040000;
+  __m256i_out = __lasx_xvmin_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000fd00ffff02ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001fffeff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff02ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffff0100;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00fe00feff02ff;
+  __m256i_out = __lasx_xvmin_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmin_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
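+  /* xvmin_h takes the per-lane signed 16-bit minimum: below, the lane
+     holding 0xfffe (-2) beats 0x0000, while 0x0001 loses to 0x0000,
+     giving 0x000000000000fffe.  */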
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000fffe;
+  __m256i_out = __lasx_xvmin_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007f7f7f7f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007f7f7f7f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000001fffe;
+  __m256i_out = __lasx_xvmin_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
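+  /* xvmin_bu takes the per-byte unsigned minimum: below, 0x7f vs 0x7f
+     keeps 0x7f, 0xff vs 0x70 keeps 0x70, and 0xff vs 0x00 keeps 0x00,
+     giving 0x7f70000000000000.  */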
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f70000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f70000000000000;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000ff0102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007c000000810081;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000ff0102;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007c000000810081;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000017bfffff0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000180007fe8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000017bfffff0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000180007fe8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000017bfffff0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000180007fe8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000017bfffff0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000180007fe8;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000017bfffff0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000180007fe8;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000017bfffff0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000180007fe8;
+  __m256i_out = __lasx_xvmin_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00010e0d00009e0e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00009000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000e0e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00009000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000033;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000033;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000033;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000033;
+  __m256i_out = __lasx_xvmin_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffff81ff7d;
+  __m256i_out = __lasx_xvmin_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffe36780;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffe36780;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvmin_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc848c848c848c848;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8848c848c848c848;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc848c848c848c848;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8848c848c848c848;
+  *((unsigned long*)& __m256i_result[3]) = 0xc800c800c800c800;
+  *((unsigned long*)& __m256i_result[2]) = 0x8800c800c800c801;
+  *((unsigned long*)& __m256i_result[1]) = 0xc800c800c800c800;
+  *((unsigned long*)& __m256i_result[0]) = 0x8800c800c800c801;
+  __m256i_out = __lasx_xvmin_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01ff0020ff1f001f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fe1ffe0ffe1ffe0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ff1f001f;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffe1ffe0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ff1f001f;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffe1ffe0;
+  __m256i_out = __lasx_xvmin_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0106010601060106;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0106010601060106;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0106010601060106;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0106010601060106;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00011ffb0000bee1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00011ffb0000bee1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001010600000106;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001010600000106;
+  __m256i_out = __lasx_xvmin_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff0000ffff;
+  __m256i_out = __lasx_xvmin_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fffff800;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffff800;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000002080100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000002080100;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4545454545454545;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4545454545454545;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4545454545454545;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4545454545454545;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvmin_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff0000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff0000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffff0001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffff0001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x80008000fff98000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x80008000fff98000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffd5d5ffffd5d6;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffd5d5ffffd5d6;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00f0000000f00010;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff0ff00fff0ff10;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00f0000000f00010;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff0ff00fff0ff10;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmin_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmin_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7070545438381c1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7070545438381c1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7070545438381c1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7070545438381c1c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffff00ffff8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff00ffff8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffff00ffff8000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff00ffff8000;
+  __m256i_out = __lasx_xvmin_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000017f7f7f7f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000017f7f7f7f;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmin_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffbfffffffb;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffbfffffffb;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffbfffffffb;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffbfffffffb;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffbfffffffb;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffbfffffffb;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffbfffffffb;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffbfffffffb;
+  __m256i_out = __lasx_xvmin_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000001de2dc20;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000001de2dc20;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000400000003ffb;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000400100004001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000400000003ffb;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000400100004001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000400000003ffb;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000400100004001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000400000003ffb;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000400100004001;
+  __m256i_out = __lasx_xvmin_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmin_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf96d674800000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x44a4330e2c7116c0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x14187a7822b653c0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfbe0b866962b96d0;
+  *((unsigned long*)& __m256i_result[3]) = 0xf90c0c0c00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0ca40c0c0c0c0cc0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0c0c0c0c0cb60cc0;
+  *((unsigned long*)& __m256i_result[0]) = 0xfbe0b80c960c96d0;
+  __m256i_out = __lasx_xvmini_b(__m256i_op0,12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0010bfc80010bf52;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff1bfca0011bfcb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0010bfc80010bf52;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff1bfca0011bfcb;
+  *((unsigned long*)& __m256i_result[3]) = 0xf5f5bfc8f5f5bff5;
+  *((unsigned long*)& __m256i_result[2]) = 0xf5f1bfcaf5f5bfcb;
+  *((unsigned long*)& __m256i_result[1]) = 0xf5f5bfc8f5f5bff5;
+  *((unsigned long*)& __m256i_result[0]) = 0xf5f1bfcaf5f5bfcb;
+  __m256i_out = __lasx_xvmini_b(__m256i_op0,-11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_d(__m256i_op0,11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op0[2]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op0[1]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op0[0]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000400000004;
+  __m256i_out = __lasx_xvmini_w(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff6fff6fff6fff6;
+  __m256i_out = __lasx_xvmini_h(__m256i_op0,-10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_wu(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1fffffff1fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0383634303836343;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1fffffff1fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0383634303836343;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002ffff0002ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0002ffff0002ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0002000200020002;
+  __m256i_out = __lasx_xvmini_h(__m256i_op0,2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvmini_du(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_bu(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_hu(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000f7bc0001f7bd;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000f93b0000017c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000f7bc0001f7bd;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000f93b0000017b;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff2f7bcfff2f7bd;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff2f93bfff2fff2;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff2f7bcfff2f7bd;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff2f93bfff2fff2;
+  __m256i_out = __lasx_xvmini_h(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fff0e400;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fff0e400;
+  __m256i_out = __lasx_xvmini_w(__m256i_op0,12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_wu(__m256i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x327f010101010102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x327f010101010102;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffff4;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffff4;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffff4;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffff4;
+  __m256i_out = __lasx_xvmini_d(__m256i_op0,-12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0007000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0007000000000000;
+  __m256i_out = __lasx_xvmini_hu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_wu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmini_h(__m256i_op0,6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000100000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffff2fffffff2;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffff2fffffff2;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffff2fffffff2;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff2fffffff2;
+  __m256i_out = __lasx_xvmini_w(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf8f8f8f8f8f8f8f8;
+  *((unsigned long*)& __m256i_result[2]) = 0xf8f8f8f8f8f8f8f8;
+  *((unsigned long*)& __m256i_result[1]) = 0xf8f8f8f8f8f8f8f8;
+  *((unsigned long*)& __m256i_result[0]) = 0xf8f8f8f8f8f8f8f8;
+  __m256i_out = __lasx_xvmini_b(__m256i_op0,-8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000002222;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003ddd80007bbb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000002222;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003ddd80007bbb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001700170017;
+  __m256i_out = __lasx_xvmini_hu(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_h(__m256i_op0,13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff9fff9fff9fff9;
+  __m256i_out = __lasx_xvmini_h(__m256i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff3fff3fff3fff3;
+  __m256i_out = __lasx_xvmini_h(__m256i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_du(__m256i_op0,0x18);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_du(__m256i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_du(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_op0[1]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_result[3]) = 0x0d0d0d0d00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0d0d0d0d00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0d0d0d0d0d0d0d0d;
+  __m256i_out = __lasx_xvmini_bu(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_wu(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000aaabffff;
+  __m256i_out = __lasx_xvmini_b(__m256i_op0,11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff47b4ffff5878;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000b84b0000a787;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff47b4ffff5878;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000b84b0000a787;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff07b4ffff0707;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000b8070000a787;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff07b4ffff0707;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000b8070000a787;
+  __m256i_out = __lasx_xvmini_b(__m256i_op0,7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000a00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000010000000a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000a00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000010000000a;
+  __m256i_out = __lasx_xvmini_w(__m256i_op0,10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[0]) = 0xf7f7f7f7f7f7f7f7;
+  __m256i_out = __lasx_xvmini_b(__m256i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffff8fffffff8;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffff8fffffff8;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffff8fffffff8;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff8fffffff8;
+  __m256i_out = __lasx_xvmini_w(__m256i_op0,-8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_b(__m256i_op0,5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_bu(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x1b1b1b1b1b1b1b1b;
+  *((unsigned long*)& __m256i_result[2]) = 0x1b1b1b1b1b1b1b1b;
+  *((unsigned long*)& __m256i_result[1]) = 0x1b1b1b1b1b1b1b1b;
+  *((unsigned long*)& __m256i_result[0]) = 0x1b1b1b1b1b1b1b1b;
+  __m256i_out = __lasx_xvmini_bu(__m256i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_bu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_wu(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001e0000001e;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001e0000001e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001e0000001e;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001e0000001e;
+  __m256i_out = __lasx_xvmini_wu(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000009;
+  __m256i_out = __lasx_xvmini_d(__m256i_op0,9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvmini_bu(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000000c;
+  __m256i_out = __lasx_xvmini_wu(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_d(__m256i_op0,13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_b(__m256i_op0,14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_du(__m256i_op0,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1e1e1e0000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1e1e1e0000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1e1e1e0000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1e1e1e0000000000;
+  __m256i_out = __lasx_xvmini_bu(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001400000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001400000000;
+  __m256i_out = __lasx_xvmini_wu(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmini_d(__m256i_op0,-1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000017;
+  __m256i_out = __lasx_xvmini_wu(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf3f3f3f3f3f3f3f3;
+  *((unsigned long*)& __m256i_result[2]) = 0xf3f3f3f3f3f3f3f3;
+  *((unsigned long*)& __m256i_result[1]) = 0xf3f3f3f3f3f3f3f3;
+  *((unsigned long*)& __m256i_result[0]) = 0xf3f3f3f3f3f3f3f3;
+  __m256i_out = __lasx_xvmini_b(__m256i_op0,-13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffff7fffffff7;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffff7fffffff7;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffff7fffffff7;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff7fffffff7;
+  __m256i_out = __lasx_xvmini_w(__m256i_op0,-9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0a0a000000000a0a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0a0a0a0a00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0a0a000000000a0a;
+  *((unsigned long*)& __m256i_result[0]) = 0x0a0a0a0a00000000;
+  __m256i_out = __lasx_xvmini_bu(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffe400000707;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000af100001455;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffe400000707;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000af100001455;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvmini_du(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff2fff2fff2fff2;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff2fff2fff2fff2;
+  *((unsigned long*)& __m256i_result[1]) = 0xfff2fff2fff2fff2;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff2fff2fff2fff2;
+  __m256i_out = __lasx_xvmini_h(__m256i_op0,-14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmini_w(__m256i_op0,4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_result[2]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_result[1]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_result[0]) = 0xf9f9f9f9f9f9f9f9;
+  __m256i_out = __lasx_xvmini_b(__m256i_op0,-7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_du(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc30e0000ff800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc30e0000ff800000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_result[2]) = 0xc3030000ff800000;
+  *((unsigned long*)& __m256i_result[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_result[0]) = 0xc3030000ff800000;
+  __m256i_out = __lasx_xvmini_b(__m256i_op0,3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000800400010006d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0008001c0010001c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0008001c0010001c;
+  __m256i_out = __lasx_xvmini_bu(__m256i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ff007f007f00;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ff007f007f00;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00ff007f007f00;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00ff007f007f00;
+  __m256i_out = __lasx_xvmini_d(__m256i_op0,-5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff61010380;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff61010380;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000006;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000006;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000006;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000006;
+  __m256i_out = __lasx_xvmini_du(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_du(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_h(__m256i_op0,11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_hu(__m256i_op0,0x1e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff0fffffff0;
+  __m256i_out = __lasx_xvmini_w(__m256i_op0,-16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_hu(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmini_w(__m256i_op0,-1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_d(__m256i_op0,12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmini_bu(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000001fffc8027;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001fffc7ff1;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000001fffc8027;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000001fffc7ff1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000014;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000014;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000014;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000014;
+  __m256i_out = __lasx_xvmini_wu(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffd1b24e00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffcea54ffff29a8;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff8cad88ff8306b4;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffc1278fffce4c8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0e2d5626ff75cdbc;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5db4b156e2002a78;
+  *((unsigned long*)& __m256i_op1[1]) = 0xeeffbeb03ba3e6b0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0c16e25eb28d27ea;
+  *((unsigned long*)& __m256i_result[3]) = 0xf96d674800000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x44a4330e2c7116c0;
+  *((unsigned long*)& __m256i_result[1]) = 0x14187a7822b653c0;
+  *((unsigned long*)& __m256i_result[0]) = 0xfbe0b866962b96d0;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffff01ffffff08;
+  *((unsigned long*)& __m256i_op1[2]) = 0x43700f0100003008;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffff01ffffff08;
+  *((unsigned long*)& __m256i_op1[0]) = 0x43700f0100003008;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff000000f8;
+  *((unsigned long*)& __m256i_result[2]) = 0xbc8ff0ffffffcff8;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000f8;
+  *((unsigned long*)& __m256i_result[0]) = 0xbc8ff0ffffffcff8;
+  __m256i_out = __lasx_xvmul_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x353bb67af686ad9b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x353bb67af686ad9b;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0200000200000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2c27000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0200000200000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x2c27000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1cfd000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1cfd000000000000;
+  __m256i_out = __lasx_xvmul_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000180000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc080ffff0049ffd2;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0002ff80ffb70000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000fffeffb9ff9d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00010000002fff9e;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffd2;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ff8000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000080000000;
+  __m256i_out = __lasx_xvmul_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001900000019;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007fff003f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007fff003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000627;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000627;
+  __m256i_out = __lasx_xvmul_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffd5a98;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffd5a98;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000007f3a40;
+  __m256i_out = __lasx_xvmul_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000400;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000400;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x120e120dedf1edf2;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x120e120dedf1edf2;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000907;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000907;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1010000010100000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1010000010100000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1010000010100000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1010000010100000;
+  __m256i_out = __lasx_xvmul_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00007fff00000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0040000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00007fff00000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffefffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffe0001fffe0003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdf00000052a00000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5b7f00ff5b7f00ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xdf00000052a00000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5b7f00ff5b7f00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffff30000000b;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffff3fffffff3;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffff30000000b;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffff3fffffff3;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbc30c40108a45423;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbc263e0e5d00e69f;
+  *((unsigned long*)& __m256i_op1[1]) = 0xbc30c40108a4544b;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbc20e63aa8b9663f;
+  *((unsigned long*)& __m256i_result[3]) = 0x71860bf35f0f9d81;
+  *((unsigned long*)& __m256i_result[2]) = 0x720ed94a46f449ed;
+  *((unsigned long*)& __m256i_result[1]) = 0x71860bf35f0f9f39;
+  *((unsigned long*)& __m256i_result[0]) = 0x72544f0e6e95cecd;
+  __m256i_out = __lasx_xvmul_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x111ebb784f9c4100;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1c386546809f3b50;
+  *((unsigned long*)& __m256i_op0[1]) = 0x111ebb784f9bf1ac;
+  *((unsigned long*)& __m256i_op0[0]) = 0x21f6050d955d3f68;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xbab0c4b000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xaa0ac09800000000;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8011ffee804c004c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00faff0500c3ff3c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x80f900f980780078;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0057ffa800ceff31;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00bf00bf00bf00bf;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00bf00bf00bf00bf;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00bf00bf00bf00bf;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00bf00bf00bf00bf;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000011;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000011;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000088;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000088;
+  __m256i_out = __lasx_xvmul_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000004000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc0008000c0008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc0008000c0008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000800080008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80008000fff98000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80008000fff98000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffeffff97a1;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffdf5b000041b0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffeffff97a1;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffdf5b000041b0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00f8000000000008;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000800f800000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00f8000000000008;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000800f800000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xe3f7fff7fffcbd08;
+  *((unsigned long*)& __m256i_result[2]) = 0x0dbfa28000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xe3f7fff7fffcbd08;
+  *((unsigned long*)& __m256i_result[0]) = 0x0dbfa28000000000;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x7070545438381c1c;
+  *((unsigned long*)& __m256i_result[2]) = 0x7070545438381c1c;
+  *((unsigned long*)& __m256i_result[1]) = 0x7070545438381c1c;
+  *((unsigned long*)& __m256i_result[0]) = 0x7070545438381c1c;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1400080008000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmul_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
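+  /* The remaining cases move from __lasx_xvmul_* (low half of the
+     element-wise product) to __lasx_xvmuh_* and __lasx_xvmuh_*u, which
+     keep the signed/unsigned high half of the product instead.  */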
+  *((unsigned long*)& __m256i_op0[3]) = 0x372e9d75e8aab100;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc5c085372cfabfba;
+  *((unsigned long*)& __m256i_op0[1]) = 0x31730b5beb7c99f5;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0658f2dc0eb21e3c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000019410000e69a;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf259905a0c126604;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000883a00000f20;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6d3c2d3aa1c82947;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000f647000007d6;
+  *((unsigned long*)& __m256i_result[2]) = 0x031b358c021ee663;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000faaf0000f9f8;
+  *((unsigned long*)& __m256i_result[0]) = 0x02b4fdadfa9704df;
+  __m256i_out = __lasx_xvmuh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf7ffffffffffff1f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbffffffffffffeff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf7ffffffffffff1f;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbffffffffffffeff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ff01ff01;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ff01c000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ff01ff01;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000f1000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000001341c4000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001000310000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000033e87ef1;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000002e2100;
+  __m256i_out = __lasx_xvmuh_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0000000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000aaaa00008bfe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000aaaa0000aaaa;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000aaaa00008bfe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000aaaa0000aaaa;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff5556aaaa;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff5556aaaa;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xb70036db12c4007e;
+  *((unsigned long*)& __m256i_op1[2]) = 0xb7146213fc1e0049;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000fefe02fffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xb71c413b199d04b5;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffefffe00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffefffefffefffd;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000089;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000045f3fb;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000045f3fb;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffbdff3cffbdff44;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffbdff3cffbdff44;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffbdff3cffbdff44;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffbdff3cffbdff44;
+  *((unsigned long*)& __m256i_result[3]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfe8bfe0efe8bfe12;
+  *((unsigned long*)& __m256i_result[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfe8bfe0efe8bfe12;
+  __m256i_out = __lasx_xvmuh_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xd207e90001fb16ef;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc8eab25698f97e90;
+  *((unsigned long*)& __m256i_op0[1]) = 0xd207e90001fb16ef;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc8eab25698f97e90;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdbc8000000003fff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xdbc8000000003fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00020002ff820002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00020002ff820002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x40efffe09fa88260;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6b07ca8e013fbf01;
+  *((unsigned long*)& __m256i_op0[1]) = 0x40efffe09fa7e358;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80ce32be3e827f00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x86ff76ffff4eff42;
+  *((unsigned long*)& __m256i_op1[2]) = 0x86ffffffffff9eff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x86ff76ffff4effff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x86ff32ffaeffffa0;
+  *((unsigned long*)& __m256i_result[3]) = 0x223d76f09f3881ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x3870ca8d013e76a0;
+  *((unsigned long*)& __m256i_result[1]) = 0x223d76f09f37e357;
+  *((unsigned long*)& __m256i_result[0]) = 0x43ec0a1b2aba7ed0;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000007fffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000036a37;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000007fffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000004def9;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmuh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0fff0ff01ff01;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0fff0fff0fff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0fff0ff01ff01;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0fff0fff0fff0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffe05fc47b400;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffe06003fc000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffe05fc47b400;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffe06003fc000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000fffe0001;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fffe0001;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000fffe0001;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fffe0001;
+  __m256i_out = __lasx_xvmuh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7575ffff75757595;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7575ffff7575f575;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7575ffff75757595;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7575ffff7575f575;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x3aadec4f6c7975b1;
+  *((unsigned long*)& __m256i_result[2]) = 0x3abac5447fffca89;
+  *((unsigned long*)& __m256i_result[1]) = 0x3aadec4f6c7975b1;
+  *((unsigned long*)& __m256i_result[0]) = 0x3abac5447fffca89;
+  __m256i_out = __lasx_xvmuh_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00003f3f00003f3f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00003f3f00003f3f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000003f;
+  __m256i_out = __lasx_xvmuh_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000027;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000027;
+  __m256i_out = __lasx_xvmuh_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000003ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000007ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffc020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffc020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001400000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001400000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x5fa0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x5fa0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0c6a240000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0f00204000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0c6a240000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0f00204000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x04a3000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x04a3000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000fdfcfda8;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000e2821d20ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000fdfcfda8;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000e2821d20ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[3]) = 0x000408080c111414;
+  *((unsigned long*)& __m256i_result[2]) = 0x000408080c111414;
+  *((unsigned long*)& __m256i_result[1]) = 0x000408080c111414;
+  *((unsigned long*)& __m256i_result[0]) = 0x000408080c111414;
+  __m256i_out = __lasx_xvmuh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf9f9f9f9f9f9f9f9;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff8000fffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001fffe00017fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff8000fffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001fffe00017fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000007f00fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000fe0000007f;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000007f00fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000fe0000007f;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffe00000ffe00000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffe00000ffe00000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff88ff88ff880000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff88ff88ff880000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff88ff88ff880000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff88ff88ff880000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000100040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000100040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffff8900000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff8900000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000001fff0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000feff0001ffb8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000001fff0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000feff0001ffb8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000fafe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000fafe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x41dffbffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffff00ff800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x41dffbffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff00ff800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffffffffff00;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ff8000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffffffffff00;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ff8000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000010000f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000010000f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmuh_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
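+  /* The tests below exercise the __lasx_xvmulwev_* intrinsics: widening
+     multiplies of the even-indexed elements of the two source vectors,
+     in signed (h_b, w_h, d_w, q_d), unsigned (h_bu, w_hu, d_wu, q_du)
+     and mixed unsigned-by-signed (h_bu_b, w_hu_h, d_wu_w, q_du_d)
+     variants.  */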
+  *((unsigned long*)& __m256i_op0[3]) = 0x4e5cba76cdbaaa78;
+  *((unsigned long*)& __m256i_op0[2]) = 0xce68fdeb4e33eaff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4e45cc2dcda41b30;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4ccb1e5c4d6b21e4;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x44bb2cd3a35c2fd0;
+  *((unsigned long*)& __m256i_result[0]) = 0xca355ba46a95e31c;
+  __m256i_out = __lasx_xvmulwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf96d674800000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x44a4330e2c7116c0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x14187a7822b653c0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfbe0b866962b96d0;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffd1b24e00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffcea54ffff29a8;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff8cad88ff8306b4;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffc1278fffce4c8;
+  *((unsigned long*)& __m256i_result[3]) = 0xebfd15f000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x01700498ff8f1600;
+  *((unsigned long*)& __m256i_result[1]) = 0xf520c7c024221300;
+  *((unsigned long*)& __m256i_result[0]) = 0x00802fd0ff540a80;
+  __m256i_out = __lasx_xvmulwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffc81aca;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003a0a9512;
+  *((unsigned long*)& __m256i_op0[1]) = 0x280ac9da313863f4;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe032c739adcc6bbd;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x006b58e20e1e0e0f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3672227c66a72cd7;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000003594;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000082fb80e;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000c7e8;
+  *((unsigned long*)& __m256i_result[0]) = 0x1ad6119c12def7bb;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000f20;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000009f0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffff328dfff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6651bfff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffff328dfff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6651bfff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffe0001c3fe4001;
+  *((unsigned long*)& __m256i_result[0]) = 0x8ffe800100000000;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0010100000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0010100000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffefe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffefe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_result[3]) = 0xff01ff01ff01fe02;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0xff01ff01ff01fe02;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000ff;
+  __m256i_out = __lasx_xvmulwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffff8ffffff08;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00f800ffcff8;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffff8ffffff08;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00f800ffcff8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000003868686a20;
+  *((unsigned long*)& __m256i_result[2]) = 0x0045b8ae81bce1d8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000003868686a20;
+  *((unsigned long*)& __m256i_result[0]) = 0x0045b8ae81bce1d8;
+  __m256i_out = __lasx_xvmulwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3ff1808001020101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3ff1808001020101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000ff7f1080ef8;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0100000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000ff7f1080ef8;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0100000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x003ff18080010201;
+  *((unsigned long*)& __m256i_result[2]) = 0x0100000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x003ff18080010201;
+  *((unsigned long*)& __m256i_result[0]) = 0x0100000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc06500550055ffab;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc06500550055ffab;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffff90ffffff81;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffff90ffffff81;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000007f000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007fff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000505;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000004fb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x6c6c6c6c6c6c6c6c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x6c6c6c6c6c6c6c6c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x6c6c6c6c6c6c6c6c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6c6c6c6c6c6c6c6c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0202000002020202;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0202000002010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0202000002020202;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0202000002020000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x01fe000000ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fe000001fe0000;
+  __m256i_out = __lasx_xvmulwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf800f800f800c000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf800f800f800a000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf800f800f800e000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf800f800f800e000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff00ffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfff8080000004000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000080000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff8080000000000;
+  __m256i_out = __lasx_xvmulwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000060000108;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001060005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007fef0001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvmulwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff01ff010000fff9;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ff19;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff02ff020001fffa;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000100010001fffa;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[3]) = 0x00fe01ff0006ffcf;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000e62f8f;
+  *((unsigned long*)& __m256i_result[1]) = 0x00fe02fe0006ffd6;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000006ffd6;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fff0e400;
+  *((unsigned long*)& __m256i_op1[3]) = 0x80000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x80000000ffff8c80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x80000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x80000000fff0e400;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ff01ff01;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ff01c000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ff01ff01;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000f1000000;
+  __m256i_out = __lasx_xvmulwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000f0000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000f0000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc2c2c2c2c2c29cc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc2c2c2c2c2c29cc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x017e00ff017e00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
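+  /* Below, 0xfe as an unsigned byte (254) times 0xfe as a signed byte (-2)
+     widens to -508 = 0xfe04 in the result halfword.  */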
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xff01ff01ff01fe04;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xff01ff01ff01fe04;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvmulwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01010101010000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffef;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffef;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0100feff0100eeef;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000001010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0100feff00feef11;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000001010;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfbba01c0003f7e3f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffc6cc05c64d960e;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfbd884e7003f7e3f;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff874dc687870000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffe367cc82f8989a;
+  *((unsigned long*)& __m256i_result[2]) = 0x4f90000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffc3aaa8d58f43c8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000002a5429;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000002a5429;
+  *((unsigned long*)& __m256i_op1[3]) = 0x417e01f040800000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x299d060000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x417e01f040800000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x29108b0000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0408040800008003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0408040800008003;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0408040800008002;
+  *((unsigned long*)& __m256i_result[0]) = 0xfbf7fbf7ffff7ffd;
+  __m256i_out = __lasx_xvmulwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000801380f380fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000801380f300fb;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff7fedffffff05;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
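+  /* _d_wu_w: the unsigned even word 0xffffffff (2^32 - 1) times the signed
+     word 0x0707b7d0 equals (0x0707b7d0 << 32) - 0x0707b7d0, i.e.
+     0x0707b7cff8f84830.  */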
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x45baa7ef6a95a985;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0707feb60707b7d0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x45baa7ef6a95a985;
+  *((unsigned long*)& __m256i_result[3]) = 0x0707b7cff8f84830;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000354ad4c28;
+  *((unsigned long*)& __m256i_result[1]) = 0x0707b7cff8f84830;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000354ad4c28;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00d5007f00ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00d5007f00ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000000d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000000d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000000d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000000d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffbdff3cffbdff44;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffbdff3cffbdff44;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000001dc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000001dc;
+  __m256i_out = __lasx_xvmulwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc192181230000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc192181230000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff00ff00ee;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff00ff00ee;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffce;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000fc7c;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffce;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000fc7c;
+  __m256i_out = __lasx_xvmulwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op1[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op1[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_result[3]) = 0x04080c1014182d35;
+  *((unsigned long*)& __m256i_result[2]) = 0x716d696573765161;
+  *((unsigned long*)& __m256i_result[1]) = 0x04080c1014182d35;
+  *((unsigned long*)& __m256i_result[0]) = 0x716d696573765161;
+  __m256i_out = __lasx_xvmulwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000000001ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffe0000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000001ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffe0000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff80000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff80000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000002;
+  __m256i_out = __lasx_xvmulwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffdfffffffdfffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffdfffffffdfffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0020000000200001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0020000000200001;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x223d76f09f3881ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3870ca8d013e76a0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x223d76f09f37e357;
+  *((unsigned long*)& __m256i_op0[0]) = 0x43ec0a1b2aba7ed0;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff8910ffff7e01;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff3573ffff8960;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff8910ffff1ca9;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffff5e5ffff8130;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
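+  /* With the unsigned first operand, 0xffff is 65535, so 65535 * 0x0d0d
+     (3341) gives 0x0d0cf2f3 rather than a negative product.  */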
+  *((unsigned long*)& __m256i_op0[3]) = 0x2c2c2c2c2c2c2c2c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2c2c2c2c2c2c2c2c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0d0d0d0d00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0d0d0d0d00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[3]) = 0x02407a3c00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0d0cf2f30d0cf2f3;
+  *((unsigned long*)& __m256i_result[1]) = 0x02407a3c00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0d0cf2f30d0cf2f3;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000022ffdd;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000022ffdd;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000f4b6ff23;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000f4b6ff23;
+  __m256i_out = __lasx_xvmulwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x201fdfe0201fdfe0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x201fdfe0201fdfe0;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x201fdfe0201fdfe0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x201fdfe0201fdfe0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff47b4ffff5878;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000b84b0000a787;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff47b4ffff5878;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000b84b0000a787;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvmulwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
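+  /* _q_du_d widens the unsigned even doubleword of the first operand and the
+     signed even doubleword of the second into a 128-bit product per lane.  */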
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffff000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffff01;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffff2;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1010101010001000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x101010100000000e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff000000fe;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff01feffff01ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000fe;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff01feffff01ff;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000017bfffff0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000180007fe8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000017bfffff0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000180007fe8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003f3f0000400d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003f3f0000400d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff02000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff02000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000000fd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000062d4;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000004e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc03ae000ffff6000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc03ae000ffff6000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fffe00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fffe00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x386000003df80000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x386000003df80000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x5fa0000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x5fa0000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x5fa0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x5fa0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00ffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00ffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00001ff800000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xd8d8c00000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00001ff800000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xd8d8c00000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000000d;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffe00000001;
+  __m256i_out = __lasx_xvmulwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3f80000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3f80000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0080000000800000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0080000000800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0080000000800000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0080000000800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000f788f788;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000f788f788;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000ffff88ff88;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000ffff88ff88;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000ef;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000016e00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000016e00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000155b200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000b70000;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001ff03fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffec75c2d209f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001ff03fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffec75c2d209f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000008b;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff010000008b;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000010100000101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010100000101;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ff1b00e4;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000400000004;
+  __m256i_out = __lasx_xvmulwev_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000000a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000000a;
+  __m256i_out = __lasx_xvmulwev_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_result[3]) = 0x0807f7f80807f7f8;
+  *((unsigned long*)& __m256i_result[2]) = 0x0807f7f80807f7f8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0807f7f80807f7f8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0807f7f80807f7f8;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfd12fd12fd12fd12;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwev_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op0[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op0[0]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op1[3]) = 0x03af03af03af03af;
+  *((unsigned long*)& __m256i_op1[2]) = 0x03acfc5303260e80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x03af03af03af03af;
+  *((unsigned long*)& __m256i_op1[0]) = 0x03acfc5303260e80;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000b0cfffff4f3;
+  *((unsigned long*)& __m256i_result[2]) = 0x000f9bb562f56c80;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000b0cfffff4f3;
+  *((unsigned long*)& __m256i_result[0]) = 0x000f9bb562f56c80;
+  __m256i_out = __lasx_xvmulwev_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ff01ff68;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000070ff017de6;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ff01ff68;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000070ff017de6;
+  *((unsigned long*)& __m256i_op1[3]) = 0x761ed60b5d7f0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xdc9938afafe904f1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x761ed60b5d7f0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xdc9938afafe904f1;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00004c9000e9d886;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00004c9000e9d886;
+  __m256i_out = __lasx_xvmulwev_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
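+  /* From here on the odd-element counterparts, __lasx_xvmulwod_*, are
+     covered; they follow the same type-suffix scheme but operate on the
+     odd-indexed source elements instead of the even-indexed ones.  */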
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000015d050192cb;
+  *((unsigned long*)& __m256i_op0[2]) = 0x028e509508b16ee9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000033ff01020e23;
+  *((unsigned long*)& __m256i_op0[0]) = 0x151196b58fd1114d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff0000ffff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff000000ffffff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffffffff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000fffffaff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffd7200fffff74f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000702f;
+  __m256i_out = __lasx_xvmulwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fff01fd7fff7fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00007fff7fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007ffe81fdfe03;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7a7cad6eca32ccc1;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7a7cad6efe69abd1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7a7cad6eca32ccc1;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7a7cad6efe69abd1;
+  *((unsigned long*)& __m256i_result[3]) = 0xff86005300360034;
+  *((unsigned long*)& __m256i_result[2]) = 0xff86005300020055;
+  *((unsigned long*)& __m256i_result[1]) = 0xff86005300360034;
+  *((unsigned long*)& __m256i_result[0]) = 0xff86005300020055;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0ff8010000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0ff8010000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000f6ff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000f6ff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000808;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01480000052801a2;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffdcff64;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbea2e127c046721f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1729c073816edebe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xde91f010000006f9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5ef1f90efefaf30d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00170000028500de;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fd02f20d;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x001175f10e4330e8;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff8f0842ff29211e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffff8d9ffa7103d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000000f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3f2c678e38d1104c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3f2c678e38d1104c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00aa000000ac00fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00aa000000ac00fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfe7ffffffeffffc0;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfe7ffffffeffffc0;
+  __m256i_out = __lasx_xvmulwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x2c27000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x2c27000000000000;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff90000fff9fff9;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff90000fff9fff9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010000;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvmulwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
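+  /* For the q_d/q_du/q_du_d forms each 128-bit lane receives one full
+     128-bit product of its selected doublewords, high half in the upper
+     doubleword of the lane; hence the all-ones q_du case above expects
+     0xfffffffffffffffe in elements 3/1 and 0x0000000000000001 in
+     elements 2/0.  */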
+  *((unsigned long*)& __m256i_op0[3]) = 0x01fc03e000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01fc03e000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00fffb0402fddf20;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00fffb0402fddf20;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001fbf9fbe29f52;
+  *((unsigned long*)& __m256i_result[2]) = 0x5b409c0000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001fbf9fbe29f52;
+  *((unsigned long*)& __m256i_result[0]) = 0x5b409c0000000000;
+  __m256i_out = __lasx_xvmulwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0408040800008003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x04080408fff87803;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0707b7cff8f84830;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000354ad4c28;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0707b7cff8f84830;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000354ad4c28;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fffd5a98;
+  __m256i_out = __lasx_xvmulwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000007f3a40;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000007f3a40;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000d24;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0010000000100000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010000000000;
+  __m256i_out = __lasx_xvmulwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000073333333;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000073333333;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fffffffa;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffffffa;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdf01010153a10101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5b7f01ff5b7f10ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xdf01010153a10101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5b7f01ff5b7f10ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00f800f800f800f8;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0018181800181818;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00f800f800f800f8;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0018181800181818;
+  *((unsigned long*)& __m256i_result[3]) = 0x001f1f3e3e1f1f00;
+  *((unsigned long*)& __m256i_result[2]) = 0x0003060909060300;
+  *((unsigned long*)& __m256i_result[1]) = 0x001f1f3e3e1f1f00;
+  *((unsigned long*)& __m256i_result[0]) = 0x0003060909060300;
+  __m256i_out = __lasx_xvmulwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000017f00007f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00007f0000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000fd;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff810000000000;
+  __m256i_out = __lasx_xvmulwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00153f1594ea02ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffffffff0100;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff15c1ea95ea02ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x71860bf35f0f9d81;
+  *((unsigned long*)& __m256i_op0[2]) = 0x720ed94a46f449ed;
+  *((unsigned long*)& __m256i_op0[1]) = 0x71860bf35f0f9f39;
+  *((unsigned long*)& __m256i_op0[0]) = 0x72544f0e6e95cecd;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff8910ffff7e01;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff3573ffff8960;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff8910ffff1ca9;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffff5e5ffff8130;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffcb423a587053;
+  *((unsigned long*)& __m256i_result[2]) = 0x6d46f43e71141b81;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffcb423a584528;
+  *((unsigned long*)& __m256i_result[0]) = 0x9bdf36c8d78158a1;
+  __m256i_out = __lasx_xvmulwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4ffc3f7800000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3fc03f6400000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4ffc3f7800000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3fc03f6400000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x4eb13ec100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3ec13ec100000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x4eb13ec100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3ec13ec100000000;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000007e8080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000007e8092;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000007e8080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000007e8092;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffd8ffc7ffdaff8a;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffd8ffc7ffdaff8a;
+  *((unsigned long*)& __m256i_result[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x800000007fff0001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x80000000ff7f0001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x800000007fff0001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x80000000ff7f0001;
+  *((unsigned long*)& __m256i_result[3]) = 0xbfffffffffff8000;
+  *((unsigned long*)& __m256i_result[2]) = 0xbfff800080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xbfffffffffff8000;
+  *((unsigned long*)& __m256i_result[0]) = 0xbfff800080000000;
+  __m256i_out = __lasx_xvmulwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000c0000005;
+  *((unsigned long*)& __m256i_op0[2]) = 0x21f8c3c4c0000005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000c0000005;
+  *((unsigned long*)& __m256i_op0[0]) = 0x21f8c3c4c0000005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffff8000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000043efffff8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffff8000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000043efffff8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xbfffa004fffd8000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xbfffa004fffd8000;
+  __m256i_out = __lasx_xvmulwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[2]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[0]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x457db03e457db03e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x457db03e45a87310;
+  *((unsigned long*)& __m256i_op0[1]) = 0x457db03e457db03e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x457db03e45a87310;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff0020001d001f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvmulwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x82ff902d83000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f80000082fe0bd9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x82ff902d83000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f80000082fe0bd9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x82ff902d83000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f80000082fe0bd9;
+  *((unsigned long*)& __m256i_op1[1]) = 0x82ff902d83000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f80000082fe0bd9;
+  *((unsigned long*)& __m256i_result[3]) = 0xc008fa01c0090000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3f804000c008f404;
+  *((unsigned long*)& __m256i_result[1]) = 0xc008fa01c0090000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3f804000c008f404;
+  __m256i_out = __lasx_xvmulwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fffe00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000fffe00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ca0000fff80000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ca0000fff80000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000003cc0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000003cc0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffe07de080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000001f20607a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffe07de080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000001f20607a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000080000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000017fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000017fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffefefefe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffefefefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfe01fe01fd02fd02;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000003fc03fc0;
+  *((unsigned long*)& __m256i_result[1]) = 0xfe01fe01fd02fd02;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000003fc03fc0;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000006d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000400008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000006d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000400008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_hu_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff6361;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4d0a902890b800dc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffff6361;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4d0a902890b800dc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001ff03ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000203ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001ff03ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000203ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000001ff03fe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffec75c2d209f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001ff03fe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffec75c2d209f;
+  __m256i_out = __lasx_xvmulwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_w_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x800000ff000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x800000ff000000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x90007fff90008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0ffffffe90008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x4800408ef07f7f01;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0800000eeffffe02;
+  __m256i_out = __lasx_xvmulwod_d_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_q_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffe00000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffe00000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000000007f8;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000002de;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000007f8;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000002de;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000007f7;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffff808;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000007f7;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffff808;
+  __m256i_out = __lasx_xvmulwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010080;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_d_wu_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf5fffc00fc000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf5fffc00fc000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf5fffc00fc000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf5fffc00fc000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_q_du_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000400000004000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc039000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc039000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc039000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc039000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmulwod_h_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x05ea05ea05ea05ec;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x05ea05ea05ea05ec;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfd02fd02fd02fd02;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x04f104f104f104f1;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x04f104f104f104f1;
+  __m256i_out = __lasx_xvmulwod_h_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0002fffc;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000fffd0003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0002fffc;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000fffd0003;
+  __m256i_out = __lasx_xvmulwod_q_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x042f0500cfea969a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa98d4f7a77c308ee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_op1[3]) = 0x34ec5670cd4b5ec0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4f111e4b8e0d7291;
+  *((unsigned long*)& __m256i_op1[1]) = 0xeaa81f47dc3bdd09;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0e0d5fde5df99830;
+  *((unsigned long*)& __m256i_op2[3]) = 0x80c72fcd40fb3bc0;
+  *((unsigned long*)& __m256i_op2[2]) = 0x84bd087966d4ace0;
+  *((unsigned long*)& __m256i_op2[1]) = 0x26aa68b274dc1322;
+  *((unsigned long*)& __m256i_op2[0]) = 0xe072db2bb9d4cd40;
+  *((unsigned long*)& __m256i_result[3]) = 0x044819410d87e69a;
+  *((unsigned long*)& __m256i_result[2]) = 0x21d3905ae3e93be0;
+  *((unsigned long*)& __m256i_result[1]) = 0x5125883a30da0f20;
+  *((unsigned long*)& __m256i_result[0]) = 0x6d7b2d3ac2777aeb;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffeff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffeff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffff001f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffff001f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000000000000ffe0;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000001e18;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000000000ffe0;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000001e18;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffff1f;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffeff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffff1f;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffeff;
+  __m256i_out = __lasx_xvmadd_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000fffe00010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000fffe00010001;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1717171717171717;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000607f700000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1717171717171717;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000607f700000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000002e0000002e;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000002e0000ffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000002e0000002e;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000002e0000fffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x1717171717171717;
+  *((unsigned long*)& __m256i_result[2]) = 0x000607f700000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x1717171717171717;
+  *((unsigned long*)& __m256i_result[0]) = 0x000607f700000001;
+  __m256i_out = __lasx_xvmadd_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000003f00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000003f00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000003f00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000003f00000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000003f00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000003f00000000;
+  __m256i_out = __lasx_xvmadd_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x370036db92c4007e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x371462137c1e0049;
+  *((unsigned long*)& __m256i_op0[1]) = 0x800000fe7e02fffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x371c413b999d04b5;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0002ff80ffb70000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffb7ff80ffd0ffd8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00010000002fff9e;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffb5ff80ffd0ffd8;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffff00ff00ffff00;
+  *((unsigned long*)& __m256i_op2[2]) = 0xff000000ff00ff00;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffff00ffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xff00000000ff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x37fe365b920d007e;
+  *((unsigned long*)& __m256i_result[2]) = 0x381462137d1e0149;
+  *((unsigned long*)& __m256i_result[1]) = 0x80ff00fe7e020060;
+  *((unsigned long*)& __m256i_result[0]) = 0x381c413b99cd04dd;
+  __m256i_out = __lasx_xvmadd_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xb70036db12c4007e;
+  *((unsigned long*)& __m256i_op1[2]) = 0xb7146213fc1e0049;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000fefe02fffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xb71c413b199d04b5;
+  *((unsigned long*)& __m256i_op2[3]) = 0xb70036db12c4007e;
+  *((unsigned long*)& __m256i_op2[2]) = 0xb7146213fc1e0049;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000fefe02fffe;
+  *((unsigned long*)& __m256i_op2[0]) = 0xb71c413b199d04b5;
+  *((unsigned long*)& __m256i_result[3]) = 0xd100645944100004;
+  *((unsigned long*)& __m256i_result[2]) = 0xd1908469108400d1;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000404040104;
+  *((unsigned long*)& __m256i_result[0]) = 0xd1108199714910f9;
+  __m256i_out = __lasx_xvmadd_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff8000000000000;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff7fff00000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op2[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op2[0]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x61f1000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0108000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x61f1a18100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0108000000000000;
+  __m256i_out = __lasx_xvmadd_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000055555555;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000004;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000055555555;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000004;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2aaaaaaa2aaaaaab;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x2aaaaaaa2aaaaaab;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_result[2]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_result[1]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_result[0]) = 0x7c007c007c007c00;
+  __m256i_out = __lasx_xvmadd_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000fd00ffff02fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001fffeff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00007f7f00007f00;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00007f7f00007fff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffff0100;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvmadd_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffee0000ff4c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ff050000ff3c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000fff90000ff78;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffa80000ff31;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmadd_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0101010127272525;
+  *((unsigned long*)& __m256i_op2[2]) = 0x23a2a121179e951d;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0101010127272525;
+  *((unsigned long*)& __m256i_op2[0]) = 0x23a2a121179e951d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvmadd_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op2[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fefffffffffffff;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff000000ff;
+  __m256i_out = __lasx_xvmadd_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x008e8e8e8e8e8e8e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x008e8e8e8e8e8e8e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000700000007;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0007ffff0007ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000700000007;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0007ffff0007ffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x008e8e8e8e8e8e8e;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x008e8e8e8e8e8e8e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007000008e700000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x007000008e700000;
+  __m256i_out = __lasx_xvmadd_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op2[2]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op2[1]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op2[0]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_result[2]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_result[0]) = 0x0100000100000001;
+  __m256i_out = __lasx_xvmadd_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmadd_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvmadd_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000080040;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00009fff00002001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00009fff00002001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0001497c98ea4fca;
+  *((unsigned long*)& __m256i_op2[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0001497c98ea4fca;
+  *((unsigned long*)& __m256i_op2[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000006715b036;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000006715b036;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvmadd_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000007f80;
+  __m256i_out = __lasx_xvmadd_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000d6d6d;
+  __m256i_out = __lasx_xvmadd_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f80ffffff808000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f80ffffff808000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x001f001fffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffe0ffe000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x001f001fffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffe0ffe000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffe0ffe000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fa0001fff808000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffe0ffe000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fa0001fff808000;
+  __m256i_out = __lasx_xvmadd_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x074132a240000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffff0008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffff0001;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00007ffe81fdfe03;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op0[2]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op0[1]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op0[0]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op1[3]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op1[2]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op1[1]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op1[0]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_result[2]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_result[1]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_result[0]) = 0x555555ab555555ab;
+  __m256i_out = __lasx_xvmsub_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvmsub_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000017f0000017d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000017f0000017f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000002e0000002e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000002e0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000002e0000002e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000002e0000fffe;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000002e0000002e;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000002e0000ffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000002e0000002e;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000002e0000fffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000f7bc0001f7bd;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000f93b0000017c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000f7bc0001f7bd;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000f93b0000017b;
+  __m256i_out = __lasx_xvmsub_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x1410141014101410;
+  *((unsigned long*)& __m256i_result[2]) = 0x1410141014101410;
+  *((unsigned long*)& __m256i_result[1]) = 0x1410141014101410;
+  *((unsigned long*)& __m256i_result[0]) = 0x1410141014101410;
+  __m256i_out = __lasx_xvmsub_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdb801b6d0962003f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xdb8a3109fe0f0024;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000007fff01ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xdb8e209d0cce025a;
+  *((unsigned long*)& __m256i_op1[3]) = 0xb70036db12c4007e;
+  *((unsigned long*)& __m256i_op1[2]) = 0xb7146213fc1e0049;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000fefe02fffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xb71c413b199d04b5;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffcc8000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000000007dfdff4b;
+  *((unsigned long*)& __m256i_result[3]) = 0xdb801b6d0962003f;
+  *((unsigned long*)& __m256i_result[2]) = 0xdb8a3109fe0f0024;
+  *((unsigned long*)& __m256i_result[1]) = 0x9a7f997fff01ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xbe632a4f1c3c5653;
+  __m256i_out = __lasx_xvmsub_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x01010101010000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvmsub_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000004800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000004500f300fb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000004800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000004500f300fb;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000004800000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000004500f300fb;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000004800000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000004500f300fb;
+  *((unsigned long*)& __m256i_result[3]) = 0x7b7b7b7b80000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xcacacb1011040500;
+  *((unsigned long*)& __m256i_result[1]) = 0x7b7b7b7b80000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xcacacb1011040500;
+  __m256i_out = __lasx_xvmsub_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffefffffffe;
+  __m256i_out = __lasx_xvmsub_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000001a00;
+  __m256i_out = __lasx_xvmsub_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfe7fffecfe7fffec;
+  *((unsigned long*)& __m256i_result[2]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfe7fffecfe7fffec;
+  *((unsigned long*)& __m256i_result[0]) = 0xff80000000000000;
+  __m256i_out = __lasx_xvmsub_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xa020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0xa020202020206431;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0xa020202020206431;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xa020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0xa020202020206431;
+  *((unsigned long*)& __m256i_result[1]) = 0xa020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0xa020202020206431;
+  __m256i_out = __lasx_xvmsub_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ff00000000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007fff80fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fff80fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff80007ffe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ff007fff80fe;
+  __m256i_out = __lasx_xvmsub_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_result[1]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f807f007f7f817f;
+  __m256i_out = __lasx_xvmsub_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000080008001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000080008001;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x1f001f00000007ef;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00001fff200007ef;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvmsub_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[3]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x4000000000000000;
+  __m256i_out = __lasx_xvmsub_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000457db03e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff457db03f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000457db03e;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff457db03f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00000000457db03e;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffff457db03f;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000000457db03e;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffff457db03f;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000457db03e;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff457db03f;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000457db03e;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff457db03f;
+  __m256i_out = __lasx_xvmsub_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000fe200000fe1f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fe200000fe1f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x001ffffe00200000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x001ffffe00200000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000fe200000fe1f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fe200000fe1f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000009e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000009e;
+  __m256i_out = __lasx_xvmsub_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffff0078ffff0078;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffff0078ffff0078;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsub_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000038ea4d4a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000038ea4d4a;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffff8;
+  __m256i_out = __lasx_xvmsub_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000d6d6d;
+  __m256i_out = __lasx_xvmsub_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x34ec5670cd4b5ec0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4f111e4b8e0d7291;
+  *((unsigned long*)& __m256i_op0[1]) = 0xeaa81f47dc3bdd09;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0e0d5fde5df99830;
+  *((unsigned long*)& __m256i_op1[3]) = 0x67390c19e4b17547;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbacda0f96d2cec01;
+  *((unsigned long*)& __m256i_op1[1]) = 0xee20ad1adae2cc16;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5a2003c6a406fe53;
+  *((unsigned long*)& __m256i_op2[3]) = 0x80c72fcd40fb3bc0;
+  *((unsigned long*)& __m256i_op2[2]) = 0x84bd087966d4ace0;
+  *((unsigned long*)& __m256i_op2[1]) = 0x26aa68b274dc1322;
+  *((unsigned long*)& __m256i_op2[0]) = 0xe072db2bb9d4cd40;
+  *((unsigned long*)& __m256i_result[3]) = 0x372e9d75e8aab100;
+  *((unsigned long*)& __m256i_result[2]) = 0x5464fbfc416b9f71;
+  *((unsigned long*)& __m256i_result[1]) = 0x31730b5beb7c99f5;
+  *((unsigned long*)& __m256i_result[0]) = 0x0d8264202b8ea3f0;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0003ff540000081c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0003ffd00003fd38;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001ffaa0000040e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000716800007bb6;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001ffe80001fe9c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000228200001680;
+  *((unsigned long*)& __m256i_op2[3]) = 0x372e9d75e8aab100;
+  *((unsigned long*)& __m256i_op2[2]) = 0xc5c085372cfabfba;
+  *((unsigned long*)& __m256i_op2[1]) = 0x31730b5beb7c99f5;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0658f2dc0eb21e3c;
+  *((unsigned long*)& __m256i_result[3]) = 0x002e4db200000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000315ac0000d658;
+  *((unsigned long*)& __m256i_result[1]) = 0x00735278007cf94c;
+  *((unsigned long*)& __m256i_result[0]) = 0x0003ed8800031b38;
+  __m256i_out = __lasx_xvmaddwev_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff0000ffff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff000000ffffff00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffffffff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01fa022a01a401e5;
+  *((unsigned long*)& __m256i_op1[2]) = 0x030d03aa0079029b;
+  *((unsigned long*)& __m256i_op1[1]) = 0x024c01f901950261;
+  *((unsigned long*)& __m256i_op1[0]) = 0x008102c2008a029f;
+  *((unsigned long*)& __m256i_op2[3]) = 0x002e4db200000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000315ac0000d658;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00735278007cf94c;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0003ed8800031b38;
+  *((unsigned long*)& __m256i_result[3]) = 0x01a72334ffff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0xff4f6838ff937648;
+  *((unsigned long*)& __m256i_result[1]) = 0x00a2afb7fff00ecb;
+  *((unsigned long*)& __m256i_result[0]) = 0xffce110f004658c7;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xebfd15f000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01700498ff8f1600;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf520c7c024221300;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00802fd0ff540a80;
+  *((unsigned long*)& __m256i_op1[3]) = 0xebfd15f000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01700498ff8f1600;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf520c7c024221300;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00802fd0ff540a80;
+  *((unsigned long*)& __m256i_op2[3]) = 0xf96d674800000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x44a4330e2c7116c0;
+  *((unsigned long*)& __m256i_op2[1]) = 0x14187a7822b653c0;
+  *((unsigned long*)& __m256i_op2[0]) = 0xfbe0b866962b96d0;
+  *((unsigned long*)& __m256i_result[3]) = 0xebfd15f000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x015c6a7facc39600;
+  *((unsigned long*)& __m256i_result[1]) = 0xfa070a51cbd95300;
+  *((unsigned long*)& __m256i_result[0]) = 0x00c7463075439280;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff0001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003a099512;
+  *((unsigned long*)& __m256i_op0[1]) = 0x280ac9da313763f5;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe032c738adcc6bbf;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xfffe000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000ffff00010000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0001000100020001;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000fffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffff0001;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000003a099512;
+  *((unsigned long*)& __m256i_result[1]) = 0x280ac9da313763f5;
+  *((unsigned long*)& __m256i_result[0]) = 0xe032c738adcc6bbf;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000eef14fe8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0202020201010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000eef14fe8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0202020201010000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xfe02fe02fee5fe22;
+  *((unsigned long*)& __m256i_op2[0]) = 0xff49fe4200000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000eef14fe8;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffe928f1313c9cc;
+  *((unsigned long*)& __m256i_result[0]) = 0x4244020201010000;
+  __m256i_out = __lasx_xvmaddwev_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff01fd7fff7fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00007fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fff01fd7fff7fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fff7fff7fff;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff3cff3cff3cff3c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff3cff3cff3cff3c;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff3cff3cff3cff3c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff3cff3cff3cff3c;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff7fff7fff;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[0]) = 0xff01ff01ff01ff01;
+  __m256i_out = __lasx_xvmaddwev_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0555550000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0555550000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xc06500550055ffab;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xc06500550055ffab;
+  *((unsigned long*)& __m256i_result[3]) = 0x0555550000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0555550000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000fb8000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000fb8000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvmaddwev_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7f00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwev_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_result[2]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_result[1]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_result[0]) = 0x0005000500050005;
+  __m256i_out = __lasx_xvmaddwev_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwev_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff0001ff04;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff02a0fefc;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000cfefd;
+  *((unsigned long*)& __m256i_op1[3]) = 0x6100000800060005;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5ee1c073b800c916;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5ff00007fff9fff3;
+  *((unsigned long*)& __m256i_op2[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op2[2]) = 0xfffffffefffffefc;
+  *((unsigned long*)& __m256i_op2[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op2[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffff7fffbfefa;
+  *((unsigned long*)& __m256i_result[2]) = 0xff1eff1902a0fea4;
+  *((unsigned long*)& __m256i_result[1]) = 0xff10000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff10fff9ff13fd17;
+  __m256i_out = __lasx_xvmaddwev_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfafafafafafafafa;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000fefefe;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xf9fbf9fbf9fbf9fb;
+  *((unsigned long*)& __m256i_result[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0xfdfffdfffdfffdff;
+  *((unsigned long*)& __m256i_result[0]) = 0xff01ff01fffffdff;
+  __m256i_out = __lasx_xvmaddwev_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x807f807f00000380;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007380;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc03fc03f000001c0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000001c0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_result[3]) = 0x807f807f00000380;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007380;
+  *((unsigned long*)& __m256i_result[1]) = 0xc03fc03f000001c0;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000001c0;
+  __m256i_out = __lasx_xvmaddwev_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwev_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x01fe01fe00000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwev_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1086658a18ba3594;
+  *((unsigned long*)& __m256i_op0[2]) = 0x160fe9f000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1086658a18ba3594;
+  *((unsigned long*)& __m256i_op0[0]) = 0x160fe9f000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xe161616161614f61;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe161616161614f61;
+  *((unsigned long*)& __m256i_op1[1]) = 0xe161616161614f61;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe161616161614f61;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000616100004f61;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000616100004f61;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000616100004f61;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000616100004f61;
+  *((unsigned long*)& __m256i_result[3]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256i_result[2]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256i_result[1]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256i_result[0]) = 0x4df5b1a3ed5e02c1;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff000100000000;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff000100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff000100000000;
+  __m256i_out = __lasx_xvmaddwev_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fffffff6;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffffff6;
+  *((unsigned long*)& __m256i_op2[3]) = 0x3f3f3f3f3f3f3f3f;
+  *((unsigned long*)& __m256i_op2[2]) = 0x3f3f3f3f3f3f3f3f;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000003f3f3f3f;
+  *((unsigned long*)& __m256i_op2[0]) = 0x3f3f3f3f00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000003f3f3f3c;
+  *((unsigned long*)& __m256i_result[2]) = 0xc6c6c6c68787878a;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000003f3f3f3c;
+  *((unsigned long*)& __m256i_result[0]) = 0x8787878a00000000;
+  __m256i_out = __lasx_xvmaddwev_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fffffff6;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffffff6;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000000003f3f3f3c;
+  *((unsigned long*)& __m256i_op2[2]) = 0xc6c6c6c68787878a;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000003f3f3f3c;
+  *((unsigned long*)& __m256i_op2[0]) = 0x8787878a00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffe3;
+  *((unsigned long*)& __m256i_result[2]) = 0x63636344c3c3c4f6;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffc3;
+  *((unsigned long*)& __m256i_result[0]) = 0xc3c3c500fffffff6;
+  __m256i_out = __lasx_xvmaddwev_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x009200f200840080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x009200f200840080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00b200b300800080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00b200b300800080;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x009200f200840080;
+  *((unsigned long*)& __m256i_result[2]) = 0x009200f200840080;
+  *((unsigned long*)& __m256i_result[1]) = 0x00b200b300800080;
+  *((unsigned long*)& __m256i_result[0]) = 0x00b200b300800080;
+  __m256i_out = __lasx_xvmaddwev_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000003f7e3f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffc6cc05c64d960e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000003f7e3f;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff874dc687870000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x41dfffc000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x41dfffdfffc00000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0001fbf9fbe29f52;
+  *((unsigned long*)& __m256i_op2[2]) = 0x5b409c0000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0001fbf9fbe29f52;
+  *((unsigned long*)& __m256i_op2[0]) = 0x5b409c0000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfbba01c0003f7e3f;
+  *((unsigned long*)& __m256i_result[2]) = 0xffc6cc05c64d960e;
+  *((unsigned long*)& __m256i_result[1]) = 0xfbd884e7003f7e3f;
+  *((unsigned long*)& __m256i_result[0]) = 0xff874dc687870000;
+  __m256i_out = __lasx_xvmaddwev_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000627;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000627;
+  *((unsigned long*)& __m256i_op2[3]) = 0x7fff7fff05407fff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x7fff7fff05407fff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000003fff3fff;
+  __m256i_out = __lasx_xvmaddwev_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_result[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x3fff3fff3fff4000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000403f3fff;
+  __m256i_out = __lasx_xvmaddwev_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00fe01fc01fe01fc;
+  *((unsigned long*)& __m256i_op0[2]) = 0x012c002c001c0006;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00fe01fc01fe0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x012c002c001c000a;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x1); /* Extract double-word element 1 of __m256i_op0; no ASSERTEQ follows in this block.  */
+  *((unsigned long*)& __m256i_op0[3]) = 0x807e80fd80fe80fd;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80938013800d8002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x807e80fd80fe0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80938013800d0005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffff00001fff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffff00001fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x807e80fd80fe80fd;
+  *((unsigned long*)& __m256i_result[2]) = 0x80938013800d8002;
+  *((unsigned long*)& __m256i_result[1]) = 0x807e80fd80fe0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x80938013800d0005;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000400;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000400;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0808080808080808;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff8fff8fff8fff8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff8fff8fff8fff8;
+  __m256i_out = __lasx_xvmaddwev_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000045f3fb;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000045f3fb;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[0]) = 0xf7f7f7f7f7f7f7f7;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000004a557baac4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x556caad9aabbaa88;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000004a557baac4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x556caad9aabbaa88;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000004a557baac4;
+  *((unsigned long*)& __m256i_op1[2]) = 0x556caad9aabbaa88;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000004a557baac4;
+  *((unsigned long*)& __m256i_op1[0]) = 0x556caad9aabbaa88;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000004a557baac4;
+  *((unsigned long*)& __m256i_result[2]) = 0x556caad9aabbaa88;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000004a557baac4;
+  *((unsigned long*)& __m256i_result[0]) = 0x556caad9aabbaa88;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00010003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0080000200000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00010003;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00010003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0080000200000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00010003;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000404040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000404040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000404040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000404040;
+  __m256i_out = __lasx_xvmaddwev_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000001a00;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff7f7f7fff7fffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff7f7f7fff7fffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3f7f7f7eff800000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3f7f7f7eff800000;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff80ff00ff80ff01;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff80ff00ff80ff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x007f00ff007f00fe;
+  *((unsigned long*)& __m256i_op2[2]) = 0xf711ee11f711ee91;
+  *((unsigned long*)& __m256i_op2[1]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xf711ee11f711ee11;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff80ff00ff80ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff80ff00ff80ff01;
+  __m256i_out = __lasx_xvmaddwev_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffeffffffdd;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffdc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x002affaa00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffeffffffdd;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffdc;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00ff00ff00ef32;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00ff00ff00ef32;
+  __m256i_out = __lasx_xvmaddwev_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000001f0000001f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001f0000ffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000060008;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000000000c005b;
+  *((unsigned long*)& __m256i_op2[1]) = 0xfffffffffffe0000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000040053;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff0007fff7;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff005affa4;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffe100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000053ffac;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01fffffffe000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01fffffffe000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x01fffffffe000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x01fffffffe000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfe00000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op1[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op2[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0001fffe0000ffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0001fffe00010001;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0001fffe0000ffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0001fffe00010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x40effc0000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x40effc0000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00007f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00010003fc827a86;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00007f7f7f7f0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f017fc0ddbf7d86;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00153f1594ea02ff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000ffffffff0100;
+  *((unsigned long*)& __m256i_op2[0]) = 0xff15c1ea95ea02ff;
+  *((unsigned long*)& __m256i_result[3]) = 0xc06e7c817f7e8081;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000bd3f016f177a;
+  *((unsigned long*)& __m256i_result[1]) = 0xc06e7c8100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x60c485800178147a;
+  __m256i_out = __lasx_xvmaddwev_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffefffef00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffefffef00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m256i_op1[3]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_op1[1]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffefffef00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m256i_result[1]) = 0xffefffef00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffefffefffefffef;
+  __m256i_out = __lasx_xvmaddwev_w_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000022;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000022;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000045ff740023;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000045ff740023;
+  __m256i_out = __lasx_xvmaddwev_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffff0001;
+  *((unsigned long*)& __m256i_op2[2]) = 0xfffffffffdd97dc4;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffff0001;
+  *((unsigned long*)& __m256i_op2[0]) = 0xfffffffffdd97dc4;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff00000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x1010100f10100fd4;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff00000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x1010100f10100fd4;
+  __m256i_out = __lasx_xvmaddwev_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000a00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000010000000a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000a00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000010000000a;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000001000b000b;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000001000b000b;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ff810011;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ff810011;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x817f11ed81800ff0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x817f11ed81800ff0;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwev_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000f7f8f7f8;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000003f78;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000f7f8f7f8;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000003f78;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[2]) = 0x805f0000ffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[0]) = 0x805f0000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000f7f8f7f8;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000003f78;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000f7f8f7f8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000003f78;
+  __m256i_out = __lasx_xvmaddwev_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000000;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwev_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xe0e0e0e0e0e0e0e0;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe0e0e0e0e0e0e0e0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000e0e0e0e0;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe0e0e0e0e0e0e0e0;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000420080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000420080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000420080000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000420080000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000420080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x5fff5fff607f0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000420080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x5fff5fff607f0000;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000900000009;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000009;
+  __m256i_out = __lasx_xvmaddwev_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fc38fc38;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fc38fc38;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfc00000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0002001800ff0078;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01f8007001f80070;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0002001800ff0078;
+  *((unsigned long*)& __m256i_op1[0]) = 0x01f8007001f80070;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0218ff78fc38fc38;
+  *((unsigned long*)& __m256i_op2[2]) = 0xfc00000000000048;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0218ff78fc38fc38;
+  *((unsigned long*)& __m256i_op2[0]) = 0xfc00000000000048;
+  *((unsigned long*)& __m256i_result[3]) = 0x00300b40fc001678;
+  *((unsigned long*)& __m256i_result[2]) = 0xfc00000000001f80;
+  *((unsigned long*)& __m256i_result[1]) = 0x00300b40fc001678;
+  *((unsigned long*)& __m256i_result[0]) = 0xfc00000000001f80;
+  __m256i_out = __lasx_xvmaddwev_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000165a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000165a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000011f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000011f;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000ffff0000ffa3;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000000000000165a;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000ffff0000ffa3;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000000000000165a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000192540;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000192540;
+  __m256i_out = __lasx_xvmaddwev_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000fffe00800022;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000fffe00800022;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000000;
+  __m256i_out = __lasx_xvmaddwev_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwev_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007f7f7f80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f7f7f80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007f7f7f80;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007f7f7f80;
+  __m256i_out = __lasx_xvmaddwev_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000e0000000e00;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000e0000000e00;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0100004300000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0100004300000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op2[2]) = 0xff00010001000100;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op2[0]) = 0xff00010001000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_result[2]) = 0x01ffff4300ffff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_result[0]) = 0x01ffff4300ffff00;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwev_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x132feea900000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op2[3]) = 0x2020080800000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000004044f4f;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0ef11ae55a5a6767;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256i_result[2]) = 0x6040190d20227a78;
+  *((unsigned long*)& __m256i_result[1]) = 0x132feeabd2d33b38;
+  *((unsigned long*)& __m256i_result[0]) = 0x6040190d00000000;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0004000f00100003;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000400030010000f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0004000f00100003;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000400030010000f;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffbfffcffeffff0;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffbfffcffeffff0;
+  __m256i_out = __lasx_xvmaddwev_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00000000ffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00000000ffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00000000ffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00000000ffffff;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000001000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000001000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff0607ffff0607;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0607ffff0607;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff0607ffff0607;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0607ffff0607;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00000000f9f9f9f9;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000000faf3f3f2;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000000f9f9f9f9;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000faf3f3f2;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffdbbbcf;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffb8579f;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffdbbbcf;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffb8579f;
+  __m256i_out = __lasx_xvmaddwev_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000101000001010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000101000001010;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000101000001010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000101000001010;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x3);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfefefefe3f800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfefefefe3f800000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000fe0000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000fe0000000;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000800080;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000800080;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1fa0000000080000;
+  __m256i_out = __lasx_xvmaddwev_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_op0[2]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_op0[1]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_op0[0]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_result[2]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_result[1]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_result[0]) = 0xebebebebebebebeb;
+  __m256i_out = __lasx_xvmaddwev_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ff88ff88;
+  __m256i_out = __lasx_xvmaddwev_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0200000002000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000002000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0200000002000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000002000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff01fb0408;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf2b180c9fc1fefdc;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff01fb0408;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf2b180c9fc1fefdc;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00003cfc0000006f;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000008000000080;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00003cfc0000006f;
+  *((unsigned long*)& __m256i_result[3]) = 0x02007f8002000400;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000c5dc02005f64;
+  *((unsigned long*)& __m256i_result[1]) = 0x02007f8002000400;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000c5dc02005f64;
+  __m256i_out = __lasx_xvmaddwev_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000016e00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000016e00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000000000155b200;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000b70000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000016e00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000016e00;
+  __m256i_out = __lasx_xvmaddwev_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000118;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000118;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffff801000000010;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffff800300000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffff801000000010;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffff800300000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000004843ffdff;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000004843ffdff;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvmaddwev_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffff80000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffeffff97a1;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffdf5b000041b0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffeffff97a1;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffdf5b000041b0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x07fee332883f86b0;
+  *((unsigned long*)& __m256i_op2[2]) = 0x07fed3c8f7ad28d0;
+  *((unsigned long*)& __m256i_op2[1]) = 0x07fee332883f86b0;
+  *((unsigned long*)& __m256i_op2[0]) = 0x07fed3c8f7ad28d0;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffeffff97a1;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffdf5b000041b0;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffeffff97a1;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffdf5b000041b0;
+  __m256i_out = __lasx_xvmaddwev_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffe00000001;
+  __m256i_out = __lasx_xvmaddwev_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwev_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x001f001fffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffe0ffe000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x001f001fffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffe0ffe000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000001e001e001e0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000001e001e001e0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000700020004;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000700020004;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0040000000000003;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0040000000000003;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000070002000a;
+  __m256i_out = __lasx_xvmaddwev_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe8440000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffe8440000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffe8440000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffe8440000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffe8440000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffe8440000;
+  __m256i_out = __lasx_xvmaddwev_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwev_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000b0cfffff4f3;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000f9bb562f56c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000b0cfffff4f3;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000f9bb562f56c80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op2[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op2[0]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_result[3]) = 0x0018761ed60b5d7f;
+  *((unsigned long*)& __m256i_result[2]) = 0xabdcdc9938afafe9;
+  *((unsigned long*)& __m256i_result[1]) = 0x0018761ed60b5d7f;
+  *((unsigned long*)& __m256i_result[0]) = 0xabdcdc9938afafe9;
+  __m256i_out = __lasx_xvmaddwev_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000ffff8000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff80008000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x800080008000b8f1;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x074132a240000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000ffff8000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x06f880008000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x800080008000b8f1;
+  __m256i_out = __lasx_xvmaddwod_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffff800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfe02fe02fee5fe22;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff49fe4200000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffff800000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffff00fe81;
+  *((unsigned long*)& __m256i_result[0]) = 0xfe808d00eefffff8;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ff80;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000468600007f79;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000f3280000dfff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fff01fd7fff7fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00007fff7fff7fff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ff80;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000468600007f79;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000f3280000dfff;
+  __m256i_out = __lasx_xvmaddwod_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ffffffffffffff;
+  __m256i_out = __lasx_xvmaddwod_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xbff0000000000000;
+  __m256i_out = __lasx_xvmaddwod_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x3ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0ff80100ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0ff80100ffffffff;
+  __m256i_out = __lasx_xvmaddwod_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffff90ffffff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffff90ffffff80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffff90ffffff80;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff90ffffff80;
+  __m256i_out = __lasx_xvmaddwod_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op2[2]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op2[1]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op2[0]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffecffffffec;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffecffffffec;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff80000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff80000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0002fffeffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0002fffeffff;
+  __m256i_out = __lasx_xvmaddwod_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000505;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf800f800f800c000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf800f800f800a000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf800f800f800e000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf800f800f800e000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff00ffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0001000100010000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x020afefb08140000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0003fffc00060000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf800f7fff8ffc0ff;
+  *((unsigned long*)& __m256i_result[2]) = 0xf8fff7fff7ffa000;
+  *((unsigned long*)& __m256i_result[1]) = 0xf800f800f800e000;
+  *((unsigned long*)& __m256i_result[0]) = 0xf800f800f800e000;
+  __m256i_out = __lasx_xvmaddwod_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
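+  /* An __lasx_xvpickve2gr_w extraction is interleaved below; the scalar
+     stored in int_out is not compared against an expected value here.  */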
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefdfffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x4);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000100;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xfffffefdfffffefd;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000100;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffff7d80000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000100;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff8c80;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffe40;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00f9f90079f9f9f9;
+  *((unsigned long*)& __m256i_op1[2]) = 0x79f9f9f900000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00f9f90079f9f9f9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x79f9f9f900000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff8c80;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffe40;
+  __m256i_out = __lasx_xvmaddwod_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000040002;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000040002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000001000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000001000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwod_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9cffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9cffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xce7ffffffffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xce7ffffffffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0x6300000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffff39ffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffff39ffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3f2c678e38d1104c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3f2c678e38d1104c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x3f2c678e38d1104c;
+  *((unsigned long*)& __m256i_result[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x3f2c678e38d1104c;
+  *((unsigned long*)& __m256i_result[0]) = 0x7ff0000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000080000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0002555500000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000080000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0002555500000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x4000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000016600000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000016600000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000016600000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000016600000000;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000003ff000003ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000003ff000003ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xfffffefe00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000003ff000003ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000003ff000003ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffefe00000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_op2[2]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_op2[1]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_op2[0]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000017e;
+  __m256i_out = __lasx_xvmaddwod_w_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xe161616161616161;
+  *((unsigned long*)& __m256i_op2[2]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_op2[1]) = 0xe161616161616161;
+  *((unsigned long*)& __m256i_op2[0]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwod_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_result[3]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[2]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0xff01ff01ff01ff01;
+  *((unsigned long*)& __m256i_result[0]) = 0xff01ff01ff01ff01;
+  __m256i_out = __lasx_xvmaddwod_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x34000000fff00000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff6e00000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3380000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x363c0000fff3c000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffb7146213;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffc1e0049;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffb71c413b;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf3317da580000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x34000000fff00000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff6e00000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x3380000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x363c0000fff3c000;
+  __m256i_out = __lasx_xvmaddwod_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000000000c0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000000c0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000000000c0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000000c0;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000012481e4950;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000001658166830;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000004000000040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000004000000040;
+  __m256i_out = __lasx_xvmaddwod_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003f3fc6c68787;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003f3f87870000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffefffffffeff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003e3ec6c68686;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000fffffeff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003e3e87870000;
+  __m256i_out = __lasx_xvmaddwod_w_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000089;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000089;
+  __m256i_out = __lasx_xvmaddwod_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000627;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000627;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1f60000000c00000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1f60000000c00000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x7fff7fff05407fff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x7fff7fff05407fff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000627;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000627;
+  __m256i_out = __lasx_xvmaddwod_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[0]) = 0xf7f7f7f7f7f7f7f7;
+  __m256i_out = __lasx_xvmaddwod_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000000;
+  __m256i_out = __lasx_xvmaddwod_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3fffffffff7f0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3fffffffff7f0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000c7aff7c00;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffd017d00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000c7aff7c00;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffd017d00;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000002030000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x030303670101fd90;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000002030000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x030303670101fd90;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3ffffffffc7bfc99;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3ffffffffc7bfc99;
+  __m256i_out = __lasx_xvmaddwod_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xff80000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffdc;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffdc;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffdc;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffdc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffeffffffdd;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffdc;
+  __m256i_out = __lasx_xvmaddwod_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwod_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x5);
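+  /* The scalar extracted into int_out above is not compared against a
+     reference value here; only the vector results below are checked.  */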
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000001fdfffffe02;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000001fefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff01fefffeff02;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000001fdfffffe02;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000001fefe;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff01fefffeff02;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fff7fff;
+  __m256i_out = __lasx_xvmaddwod_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x40efffe000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x40efffe000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00000000ff7fff7f;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000000ff7f027f;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000000ff7f0100;
+  *((unsigned long*)& __m256i_op2[0]) = 0xff00fe00fe7f027f;
+  *((unsigned long*)& __m256i_result[3]) = 0x40efffe09fa88260;
+  *((unsigned long*)& __m256i_result[2]) = 0x6b07ca8e013fbf01;
+  *((unsigned long*)& __m256i_result[1]) = 0x40efffe09fa7e358;
+  *((unsigned long*)& __m256i_result[0]) = 0x80ce32be3e827f00;
+  __m256i_out = __lasx_xvmaddwod_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvmaddwod_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1020102010201020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1020102010201020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1020102010201020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1020102010201020;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[3]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_op2[1]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_result[3]) = 0x1031146010201020;
+  *((unsigned long*)& __m256i_result[2]) = 0x1020102010201020;
+  *((unsigned long*)& __m256i_result[1]) = 0x1031146010201020;
+  *((unsigned long*)& __m256i_result[0]) = 0x1020102010201020;
+  __m256i_out = __lasx_xvmaddwod_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffdfffffffdffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffddffdeffb5ff8d;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffdfffffffdffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffddffdeffb5ff8d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffffffeeffaf;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1010100f10100fd4;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffffffeeffaf;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1010100f10100fd4;
+  *((unsigned long*)& __m256i_op2[3]) = 0xfffdfffffffdffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffddffdeffb5ff8d;
+  *((unsigned long*)& __m256i_op2[1]) = 0xfffdfffffffdffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffddffdeffb5ff8d;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffefffcffff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0febedc9bb95dd8f;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffefffcffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0febedc9bb95dd8f;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000b8f81b8c850f4;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000b8f81b8c850f4;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000b8f81b8c850f4;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000b8f81b8c850f4;
+  *((unsigned long*)& __m256i_result[3]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_result[2]) = 0x000b2673a90896a4;
+  *((unsigned long*)& __m256i_result[1]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_result[0]) = 0x000b2673a90896a4;
+  __m256i_out = __lasx_xvmaddwod_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvmaddwod_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000545400;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000545400;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffff040000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffff040000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000fe;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwod_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwod_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000017bfffff0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000180007fe8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000017bfffff0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000180007fe8;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff7bfffff1;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff80007fe9;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff7bfffff1;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff80007fe9;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000781;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000781;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000064;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc03ae000ffff6000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc03ae000ffff6000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffc03fffffffc0;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffc00000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffc03fffffffc0;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffc00000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xc03ae000ffff6000;
+  *((unsigned long*)& __m256i_result[2]) = 0xc600000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xc03ae000ffff6000;
+  *((unsigned long*)& __m256i_result[0]) = 0xc600000000000000;
+  __m256i_out = __lasx_xvmaddwod_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01fe007a01c40110;
+  *((unsigned long*)& __m256i_op0[2]) = 0x019d00a2003a0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01fe007a01c40110;
+  *((unsigned long*)& __m256i_op0[0]) = 0x019d00a2003a0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000077fff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x01fe007a01c40110;
+  *((unsigned long*)& __m256i_result[2]) = 0x019d00a20039fff9;
+  *((unsigned long*)& __m256i_result[1]) = 0x01fe007a01c40110;
+  *((unsigned long*)& __m256i_result[0]) = 0x019d00a2003a0000;
+  __m256i_out = __lasx_xvmaddwod_w_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvmaddwod_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010511c54440437;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010511c54440437;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000103fca1bd;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000103fca1bd;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000103fca1bd;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000103fca1bd;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0010511c54440438;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0010511c54440438;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff000003c0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff000003c0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000fc300000fc40;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000fc300000fc40;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7c030000ffc4;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7c030000ffc4;
+  __m256i_out = __lasx_xvmaddwod_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000fc300000fc40;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000fc300000fc40;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_xvmaddwod_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x386000003df80000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x386000003df80000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x386000003df80000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x386000003df80000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0c6a240000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0c6a240000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_wu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000003cc0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000003cc0;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000003cc0;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000003cc0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_w_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00f7000000f70006;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00f7000000f70006;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwod_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000080800000808;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000080800000808;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_op1[1]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3d3d3d3d3d3d3d3d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3d3d3d3d3d3d3d3d;
+  __m256i_out = __lasx_xvmaddwod_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x437f201f201f2020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x037f201f001f2020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x437f201f201f2020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x037f201f001f2020;
+  *((unsigned long*)& __m256i_op2[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x21bb481000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x01bf481000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x21bb481000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x01bf481000000000;
+  __m256i_out = __lasx_xvmaddwod_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000000006040190d;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000006040190d;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000006040190c;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff9fbfe6f3;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000006040190c;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff9fbfe6f3;
+  __m256i_out = __lasx_xvmaddwod_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0200000202000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0200000202000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0200000202000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0200000202000002;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0200000202000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0200000202000002;
+  __m256i_out = __lasx_xvmaddwod_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xfff8fffffff8ffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xfff8fffffff8ffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xfff8fffffff8ffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xfff8fffffff8ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwod_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00000000f9f9f9f9;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000000faf3f3f2;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000000f9f9f9f9;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000faf3f3f2;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff0007a861;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff0007a861;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwod_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000956a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000004efffe00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000956a;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000004efffe00;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000000000000956a;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000000004efffe00;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000000000956a;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000000004efffe00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000057348fe3;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000057348fe3;
+  __m256i_out = __lasx_xvmaddwod_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvmaddwod_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1f60010000080100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1f60010000080100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1f60010000080100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1f60010000080100;
+  __m256i_out = __lasx_xvmaddwod_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000010000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000100080;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000100080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvmaddwod_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xfff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffe000ffffffffff;
+  __m256i_out = __lasx_xvmaddwod_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x94d7fb5200000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x94d7fb5200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000038ea4d4a;
+  *((unsigned long*)& __m256i_op2[2]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000038ea4d4a;
+  *((unsigned long*)& __m256i_op2[0]) = 0x7fff00007fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x94d7fb5200000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x94d7fb5200000000;
+  __m256i_out = __lasx_xvmaddwod_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01c03f8034c03200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3dc02b400a003400;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01c03f8034c03200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3dc02b400a003400;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01c03f8034c03200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x3dc02b400a003400;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01c03f8034c03200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x3dc02b400a003400;
+  *((unsigned long*)& __m256i_op2[3]) = 0x07fee332883f86b0;
+  *((unsigned long*)& __m256i_op2[2]) = 0x07fed3c8f7ad28d0;
+  *((unsigned long*)& __m256i_op2[1]) = 0x07fee332883f86b0;
+  *((unsigned long*)& __m256i_op2[0]) = 0x07fed3c8f7ad28d0;
+  *((unsigned long*)& __m256i_result[3]) = 0x01ce3c0050d32d40;
+  *((unsigned long*)& __m256i_result[2]) = 0x3fadafc013acf600;
+  *((unsigned long*)& __m256i_result[1]) = 0x01ce3c0050d32d40;
+  *((unsigned long*)& __m256i_result[0]) = 0x3fadafc013acf600;
+  __m256i_out = __lasx_xvmaddwod_w_hu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000000010000685e;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000020a4ffffbe4f;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000010000685e;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000020a4ffffbe4f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_h_bu_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000000ffffff1dff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffff1dffffff1dff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000ffffff1dff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffff1dffffff1dff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffff0020;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff8001ffff0001;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffff0020;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff8001ffff0001;
+  __m256i_out = __lasx_xvmaddwod_w_hu_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfbff0000ffff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfbff0000ffff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfbff0000ffff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfbff0000ffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffff010100000001;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffff010100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0101010101010110;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0101010101010110;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_d_wu_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x003fffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x003fffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_q_du(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000020000010201;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000020000010201;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000020000010201;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000020000010201;
+  __m256i_out = __lasx_xvmaddwod_h_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffeffed;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffeffed;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffeffed;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffeffed;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffeffed;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffeffed;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffeffed;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffeffed;
+  *((unsigned long*)& __m256i_op2[3]) = 0xc039000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xc039000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xc039000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xc039000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xbf3ffffffffeffed;
+  *((unsigned long*)& __m256i_result[2]) = 0xbf3ffffffffeffed;
+  *((unsigned long*)& __m256i_result[1]) = 0xbf3ffffffffeffed;
+  *((unsigned long*)& __m256i_result[0]) = 0xbf3ffffffffeffed;
+  __m256i_out = __lasx_xvmaddwod_h_bu(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000002780;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000002780;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmaddwod_w_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
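+  /* The cases below move on to the element-wise division intrinsics,
+     __lasx_xvdiv_{b,h,w,d} for signed and __lasx_xvdiv_{bu,hu,wu,du} for
+     unsigned element types, using the same fixed-operand / expected-vector
+     pattern as above.  */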
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00020001ffb6ffe0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0049004200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ff80;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000468600007f79;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000f3280000dfff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffb7;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000004c00000000;
+  __m256i_out = __lasx_xvdiv_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000003fb000003fb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000003fb000003fb;
+  __m256i_out = __lasx_xvdiv_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00080000000cc916;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000006fff3;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffefffffefc;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00f8000000f41bfb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000fa0106;
+  __m256i_out = __lasx_xvdiv_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1fe01e0100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1fe01e0100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1fe01e0100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1fe01e0100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvdiv_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x6300000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9cffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9cffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x3f2c678e38d1104c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x3f2c678e38d1104c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff9fffffffbffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffdaaaaffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000017e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0202810102020202;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0202810102020202;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000003f;
+  __m256i_out = __lasx_xvdiv_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x01fe8001b72e0001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xb72e8001b72eaf12;
+  *((unsigned long*)& __m256i_op0[1]) = 0x01fe000247639d9c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xb5308001b72eaf12;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x017e00ff017e00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002ff80ffb70000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffb7ff80ffd0ffd8;
+  *((unsigned long*)& __m256i_result[1]) = 0x00010000002fff9e;
+  *((unsigned long*)& __m256i_result[0]) = 0xffb5ff80ffd0ffd8;
+  __m256i_out = __lasx_xvdiv_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8091811081118110;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80a6802680208015;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8091811081110013;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80a6802680200018;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8091811081118110;
+  *((unsigned long*)& __m256i_op1[2]) = 0x80a6802680208015;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8091811081110013;
+  *((unsigned long*)& __m256i_op1[0]) = 0x80a6802680200018;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvdiv_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffba0c05;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffba0c05;
+  *((unsigned long*)& __m256i_op1[3]) = 0x5353535353535353;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5353535353535353;
+  *((unsigned long*)& __m256i_op1[1]) = 0x5353535353535353;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5353535353535353;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0303030303020000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0303030303020000;
+  __m256i_out = __lasx_xvdiv_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000d000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000d000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000583800;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000100000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000583800;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000100000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000d0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000d0000;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000045;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000d0005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000045;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000d0005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000013b13380;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000013b13380;
+  __m256i_out = __lasx_xvdiv_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdededededededede;
+  *((unsigned long*)& __m256i_op1[2]) = 0xdededededededede;
+  *((unsigned long*)& __m256i_op1[1]) = 0xdededededededede;
+  *((unsigned long*)& __m256i_op1[0]) = 0xdededededededede;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffa080000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffe080000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffa080000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffe080000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffdfffdfffdfffd;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0100010001000100;
+  __m256i_out = __lasx_xvdiv_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op1[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0010002000100020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0010002000100020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0010002000100020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0010002000100020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000fd00ffff02ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001fffeff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001fffe0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001fffe00010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001fffe0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001fffe00010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff81ffffff00;
+  __m256i_out = __lasx_xvdiv_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffff30000000b;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffff3fffffff3;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffff30000000b;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffff3fffffff3;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0010000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0010000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ff827f80;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0226823c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ff827f80;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0226823c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000007fef;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007fef;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000007fef;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000007fef;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000007fee;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000ffff0001;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffff00;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffff00;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000e2e20000e2e2;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00011d1c00011d9c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000e2e20000e2e2;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00011d1c00011d9c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000e2e20000e2e2;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00011d1c00011d9c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000e2e20000e2e2;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00011d1c00011d9c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvdiv_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe2e2e202ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe2e2e202ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffc6ffc6003a003a;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffc6ffc6003a003a;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000465;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000465;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010001;
+  __m256i_out = __lasx_xvdiv_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01010101010101c9;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x01010101010101c9;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000003f;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x007d003e007d003e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x007d003effa80010;
+  *((unsigned long*)& __m256i_op1[1]) = 0x007d003e007d003e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x007d003effa80010;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x232221201f1e1d1c;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1b1a191817161514;
+  *((unsigned long*)& __m256i_op1[1]) = 0x232221201f1e1d1c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1b1a191817161514;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf800f800f800f800;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf800f800f800f800;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf800f800f800f800;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf800f800f800f800;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000fe000000fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000fe000000fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000fe000000fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000fe000000fe;
+  __m256i_out = __lasx_xvdiv_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvdiv_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01ffff4300ffff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x01ffff4300ffff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000008000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000008000000100;
+  __m256i_out = __lasx_xvdiv_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x1f831f80e0e09f86;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x1f831f80e0e09f86;
+  __m256i_out = __lasx_xvdiv_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000030b8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000030b8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9fe7fffffffff32e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x6040190ddfdd8587;
+  *((unsigned long*)& __m256i_op1[1]) = 0xecd011542d2cc4c7;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6040190dffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[3]) = 0x7f7fff7f7f7fff7f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f7fff7f7f7fff7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f7fff7f7f7fff7f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f7fff7f7f7fff7f;
+  __m256i_out = __lasx_xvdiv_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff0000000f;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff0000000f;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff0000000f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff0000000f;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvdiv_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007fff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007fff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000008000;
+  __m256i_out = __lasx_xvdiv_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000010100000101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000010100000101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000101;
+  __m256i_out = __lasx_xvdiv_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001010000;
+  __m256i_out = __lasx_xvdiv_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010202020203;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010201010102;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010202020203;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010201010102;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffff0fffffff0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010202020203;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010201010102;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010202020203;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010201010102;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvdiv_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x001fffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x001fffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffffe;
+  __m256i_out = __lasx_xvdiv_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff000000010000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000095120000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc9da000063f50000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc7387fff6bbfffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvmod_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffe06df8d7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffbe8b470f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ffffffffffff7ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffe06df0d7;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ffffffffffff7ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffbe8b470f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000800;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000010100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000001000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000800000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000800080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc9d8080067f50020;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc70000020000c000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000010100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000001000100;
+  __m256i_out = __lasx_xvmod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff3cff3cff3cff3c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff3cff3cff3cff3c;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff3cff3cff3cff3c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff3cff3cff3cff3c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1e18000000000000;
+  __m256i_out = __lasx_xvmod_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff6fff6fff6fff6;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1fffffff1fffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0383634303836343;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1fffffff1fffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0383634303836343;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001000000;
+  __m256i_out = __lasx_xvmod_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1c1b1a191c1b1a19;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00007f8000007f80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000001400000014;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff90000fff9fff9;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff9fff9fff9fff9;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff90000fff9fff9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x108659e46485f7e1;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4df5b1a3ed5e02c1;
+  *((unsigned long*)& __m256i_result[3]) = 0x081abb9d36ee1037;
+  *((unsigned long*)& __m256i_result[2]) = 0x1617eb17129bfd38;
+  *((unsigned long*)& __m256i_result[1]) = 0x081abb9d36ee1037;
+  *((unsigned long*)& __m256i_result[0]) = 0x1617eb17129bfd38;
+  __m256i_out = __lasx_xvmod_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffefffefffefffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvmod_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8001b72e0001b72e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8001b72eaf12d5f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000247639d9cb530;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8001b72eaf12d5f0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x017e00ff017e00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_result[3]) = 0xff81ffe50001ffe5;
+  *((unsigned long*)& __m256i_result[2]) = 0xff81ffe5ffa6ffc6;
+  *((unsigned long*)& __m256i_result[1]) = 0x000200aafe9affe5;
+  *((unsigned long*)& __m256i_result[0]) = 0xff81ffe5ffa6ffc6;
+  __m256i_out = __lasx_xvmod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_op1[2]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_op1[1]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_op1[0]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0909090909090909;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0909090909090909;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0909090909090909;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0909090909090909;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfe8bfe0efe8bfe12;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfe8bfe0efe8bfe12;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7c007c007c007c00;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007efeff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001010000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007efeff00;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000008e7c00;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000067751500;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000008e7c00;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000067751500;
+  __m256i_out = __lasx_xvmod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x80008000b70fb810;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3c0f3c0f3911b910;
+  *((unsigned long*)& __m256i_op0[1]) = 0x80008000b70fb810;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3c0f3c0f3911b910;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffff6f20;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000781e0000f221;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffff6f20;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000781e0000f221;
+  __m256i_out = __lasx_xvmod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op1[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffe000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffe000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000e000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000e000;
+  __m256i_out = __lasx_xvmod_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_hu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffefffef00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffefffef00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff00ff0000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff00ff0000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[3]) = 0xffefffef00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m256i_result[1]) = 0xffefffef00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffefffefffefffef;
+  __m256i_out = __lasx_xvmod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7171717171717171;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8e8e8e8e8e8e8e8e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7171717171717171;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8e8e8e8e8e8e8e8e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc800c800c800c800;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8800c800c800c801;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc800c800c800c800;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8800c800c800c801;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc848c848c848c848;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8848c848c848c848;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc848c848c848c848;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8848c848c848c848;
+  *((unsigned long*)& __m256i_op1[3]) = 0xc848c848c848c848;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8848c848c848c848;
+  *((unsigned long*)& __m256i_op1[1]) = 0xc848c848c848c848;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8848c848c848c848;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01010101010101c9;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x01010101010101c9;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000005500000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001005500020000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000005500000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001005500020000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000100010001fffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000100010001fffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000005500000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000005400000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000005500000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000005400000002;
+  __m256i_out = __lasx_xvmod_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_du(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101000101010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101000101010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101000101010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101000101010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fe36364661af18f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fe36364661af18f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fe363637fe36363;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101000101010001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101000101010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101000101010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101000101010001;
+  __m256i_out = __lasx_xvmod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0001000e0001000e;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0001000e0001000e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0001000e0001000e;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0001000e0001000e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc3030000ff800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xff800000ff800000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc3030000ff800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00000000000000;
+  __m256i_out = __lasx_xvmod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffff800000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x007f0000ff807f81;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffff800000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x007f0000ff807f81;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000ff0000;
+  __m256i_out = __lasx_xvmod_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x41dffbffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffff00ff800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x41dffbffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffff00ff800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff8000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffff8000;
+  __m256i_out = __lasx_xvmod_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000010100000101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000010100000101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000010100000101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000010100000101;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000000c;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmod_wu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000200000002;
+  __m256i_out = __lasx_xvmod_bu(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
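+  /* The __lasx_xvsat_* blocks below test element saturation.  Judging
+     from the expected vectors, the signed forms clamp each element to
+     [-2^ui, 2^ui - 1] and the unsigned forms to [0, 2^(ui+1) - 1],
+     where ui is the immediate operand.  */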
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000800080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc9d8080067f50020;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc70000020000c000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf000f00000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000f000f0000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xf0f008000ff5000f;
+  *((unsigned long*)& __m256i_result[0]) = 0xf00000020000f000;
+  __m256i_out = __lasx_xvsat_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0002000200000022;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0049004200000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000007f00000022;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000007f00000000;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000fff8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvsat_b(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x97541c5897541c58;
+  *((unsigned long*)& __m256i_op0[2]) = 0x97541c5897541c58;
+  *((unsigned long*)& __m256i_op0[1]) = 0x97541c5897541c58;
+  *((unsigned long*)& __m256i_op0[0]) = 0x97541c5897541c58;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffc00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffc00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffc00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffc00000000;
+  __m256i_out = __lasx_xvsat_d(__m256i_op0,0x22);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000399400003994;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000399400003994;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000399400003994;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000399400003994;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000fff00000fff;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff605a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff605a;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff605a;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffff605a;
+  __m256i_out = __lasx_xvsat_d(__m256i_op0,0x2d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_wu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_wu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000003ffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000003ffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000003ffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000003ffffffffff;
+  __m256i_out = __lasx_xvsat_du(__m256i_op0,0x29);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_du(__m256i_op0,0x34);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffefffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffefffffefd;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_wu(__m256i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x001175f10e4330e8;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff8f0842ff29211e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffff8d9ffa7103d;
+  *((unsigned long*)& __m256i_result[3]) = 0x001175f10e4330e8;
+  *((unsigned long*)& __m256i_result[2]) = 0xff8f0842ff29211e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffff8d9ffa7103d;
+  __m256i_out = __lasx_xvsat_d(__m256i_op0,0x39);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001ffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001ffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000007;
+  __m256i_out = __lasx_xvsat_du(__m256i_op0,0x30);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfebdff3eff3dff52;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfebdff3eff3dff52;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfebdff3eff3dff52;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfebdff3eff3dff52;
+  *((unsigned long*)& __m256i_result[3]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_result[2]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_result[1]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_result[0]) = 0xffc0ffc0ffc0ffc0;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x247fe49409620040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2475cef801f0ffdd;
+  *((unsigned long*)& __m256i_op0[1]) = 0x6580668200fe0002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x419cd5b11c3c5654;
+  *((unsigned long*)& __m256i_result[3]) = 0x247fe49409620040;
+  *((unsigned long*)& __m256i_result[2]) = 0x2475cef801f0ffdd;
+  *((unsigned long*)& __m256i_result[1]) = 0x6580668200fe0002;
+  *((unsigned long*)& __m256i_result[0]) = 0x419cd5b11c3c5654;
+  __m256i_out = __lasx_xvsat_du(__m256i_op0,0x3f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x3f3f3f3f3f3f3f3f;
+  *((unsigned long*)& __m256i_result[2]) = 0x3f3f3f3f3f3f3f3f;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000003f3f3f3f;
+  *((unsigned long*)& __m256i_result[0]) = 0x3f3f3f3f00000000;
+  __m256i_out = __lasx_xvsat_bu(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvsat_d(__m256i_op0,0x21);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000003fff3fff;
+  __m256i_out = __lasx_xvsat_hu(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_wu(__m256i_op0,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00c200c200c200c2;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00c200c200c200bb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00c200c200c200c2;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00c200c200c200bb;
+  *((unsigned long*)& __m256i_result[3]) = 0x007fffff007fffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x007fffff007fffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x007fffff007fffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x007fffff007fffff;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc2c2c2c2c2c2c2c2;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffe000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffe000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffe000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffe000000000000;
+  __m256i_out = __lasx_xvsat_d(__m256i_op0,0x31);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_bu(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_b(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00002df900001700;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffe05ffffe911;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00002df900001700;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffe05ffffe911;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000300000003;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffcfffffffc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000300000003;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffcfffffffc;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvsat_bu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_result[3]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_result[2]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_result[1]) = 0x001f001f001f001f;
+  *((unsigned long*)& __m256i_result[0]) = 0x001f001f001f001f;
+  __m256i_out = __lasx_xvsat_hu(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_wu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001ffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfe00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000017f7f7f;
+  *((unsigned long*)& __m256i_result[2]) = 0x7f00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000017f7f7f;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f00000000000000;
+  __m256i_out = __lasx_xvsat_bu(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001ffff0001ffff;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000080000001000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000080000001000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000080000000800;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000f0000000f;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff0001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffdd97dc4;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffff0001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffdd97dc4;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffff0001;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffdd97dc4;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffff0001;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffdd97dc4;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000b8f81b8c840e4;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000b8f81b8c840e4;
+  *((unsigned long*)& __m256i_result[3]) = 0x000007ff000007ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000007fffffff800;
+  *((unsigned long*)& __m256i_result[1]) = 0x000007ff000007ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000007fffffff800;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0xb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_du(__m256i_op0,0x22);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0x12);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00001fff00001fff;
+  __m256i_out = __lasx_xvsat_hu(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_wu(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000001ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000001ffffffff;
+  __m256i_out = __lasx_xvsat_d(__m256i_op0,0x21);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000700000007;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0007ffff0007ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000700000007;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0007ffff0007ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000700000007;
+  *((unsigned long*)& __m256i_result[2]) = 0x00071f1f00071f1f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000700000007;
+  *((unsigned long*)& __m256i_result[0]) = 0x00071f1f00071f1f;
+  __m256i_out = __lasx_xvsat_bu(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_d(__m256i_op0,0x3d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_hu(__m256i_op0,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x457db03e457db03e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x457db03e45a87310;
+  *((unsigned long*)& __m256i_op0[1]) = 0x457db03e457db03e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x457db03e45a87310;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000f0000000f;
+  __m256i_out = __lasx_xvsat_wu(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000077fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000003ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_wu(__m256i_op0,0x11);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000001ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000001ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000001ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000001ff;
+  __m256i_out = __lasx_xvsat_du(__m256i_op0,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003fe000000000;
+  __m256i_out = __lasx_xvsat_wu(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000003f003f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000003f003f;
+  __m256i_out = __lasx_xvsat_hu(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000e000e000e000e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000e000e000e000e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000e000e000e000e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000e000e000e000e;
+  __m256i_out = __lasx_xvsat_b(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000014402080144;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000007f007f007f;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc0090000c0200060;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc0090000c0200060;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f0000007f0060;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f0000007f0060;
+  __m256i_out = __lasx_xvsat_hu(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3eab77367fff4848;
+  *((unsigned long*)& __m256i_op0[2]) = 0x408480007fff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3eab77367fff4848;
+  *((unsigned long*)& __m256i_op0[0]) = 0x408480007fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0003000300030003;
+  *((unsigned long*)& __m256i_result[2]) = 0x0003000300030000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0003000300030003;
+  *((unsigned long*)& __m256i_result[0]) = 0x0003000300030000;
+  __m256i_out = __lasx_xvsat_hu(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fffcfffc;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fffcfffc;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fffcfffc;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffcfffc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000003fff;
+  __m256i_out = __lasx_xvsat_wu(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_d(__m256i_op0,0x1c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvsat_b(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_b(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000007fffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000007fffff;
+  __m256i_out = __lasx_xvsat_du(__m256i_op0,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001fff000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000029170;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001fff000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000029170;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000001ff03ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000203ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001ff03ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000203ff;
+  __m256i_out = __lasx_xvsat_hu(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000017f00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00007f7f03030000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000017f00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007f7f03030000;
+  __m256i_out = __lasx_xvsat_du(__m256i_op0,0x37);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7efefefe80ffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0fffffff0fffffff;
+  __m256i_out = __lasx_xvsat_wu(__m256i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffe000ffffffff08;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffe000ffffffff08;
+  *((unsigned long*)& __m256i_result[3]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0fffffff0fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0fffffff0fffffff;
+  __m256i_out = __lasx_xvsat_wu(__m256i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0fffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0fffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0fffffffffffffff;
+  __m256i_out = __lasx_xvsat_d(__m256i_op0,0x3c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffff8;
+  __m256i_out = __lasx_xvsat_d(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000002c21ffeff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc0000000c0000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000002c21ffeff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc0000000c0000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffff8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffff8;
+  __m256i_out = __lasx_xvsat_d(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_w(__m256i_op0,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000d6d6d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000d6d6d;
+  __m256i_out = __lasx_xvsat_bu(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_hu(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_hu(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000ffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000003fffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000003fffff;
+  __m256i_out = __lasx_xvsat_h(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_d(__m256i_op0,0x32);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00003fea00013fec;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003fe50001c013;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00003fea00013fec;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003fe50001c013;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff0000ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff0000ff00;
+  __m256i_out = __lasx_xvsat_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsat_du(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
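+  /* The following checks cover the __lasx_xvexth_* intrinsics; the expected
+     values indicate they widen the elements in the high half of each 128-bit
+     lane to the next wider element type, sign- or zero-extending according
+     to the suffix.  */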
+  *((unsigned long*)& __m256i_op0[3]) = 0x8b1414140e0e0e0e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00d6c1c830160048;
+  *((unsigned long*)& __m256i_op0[1]) = 0x36722a7e66972cd6;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe3aebaf4df958004;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8b1414140e0e0e0e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x36722a7e66972cd6;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffff8046867f79;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffff328dfff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6651bfff80000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ff80;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000468600007f79;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000f3280000dfff;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000022;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001010101;
+  __m256i_out = __lasx_xvexth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffff5f5c;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op0[2]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op0[1]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op0[0]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_result[3]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_result[2]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_result[1]) = 0x005500550055ffab;
+  *((unsigned long*)& __m256i_result[0]) = 0x005500550055ffab;
+  __m256i_out = __lasx_xvexth_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f7f7f7f7f7f7f7f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000102;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000007f00340040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000007f000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffec;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffebd8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffec;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffebd8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffec;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffec;
+  __m256i_out = __lasx_xvexth_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffec;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffec;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000089;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000045f3fb;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000045f3fb;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000004500f300fb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000004500f300fb;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00080000002c0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0008000000080000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00080000002c0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0008000000080000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00080000002c0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00080000002c0000;
+  __m256i_out = __lasx_xvexth_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvexth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvexth_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvexth_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000fffe;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc1d75053f0000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc1d75053f0000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x004100df00ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00c000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x004100df00ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00c000000000;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffa30000165a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000104000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffa30000165a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000104000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffa3;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000165a;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffa3;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000165a;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001010600000106;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001010600000106;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff7fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x007f00ff007f00ff;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000007f007f007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f0000007f0060;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f0000007f0060;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002000200010002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0002000200010002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0002000200010002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0002000200010002;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007cfcfd80000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007cfcfd80000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fff8579f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff010ff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff010ff0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00ff00ff;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000002000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000020ff790020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000002000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000020ff790020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000010;
+  __m256i_out = __lasx_xvexth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001ff03fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffec75c2d209f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001ff03fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffec75c2d209f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000001ff000003fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000001ff000003fe;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvexth_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000a400ff004f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000a400ff004f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000010000005e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000010000005e;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffff1cffffff1c;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffff1cffffff1c;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffff1cffffff1c;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff1cffffff1c;
+  __m256i_out = __lasx_xvexth_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000ff0000;
+  __m256i_out = __lasx_xvexth_qu_du(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000010100000101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000010100000101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000010100000101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000010100000101;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_q_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0080000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0080000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000ff;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000005ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000007ffffffce;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvexth_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvexth_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
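+  /* The remaining checks use the __lasx_vext2xv_* intrinsics; the expected
+     values indicate they widen the lowest elements of the full 256-bit
+     vector and may cross the 128-bit lane boundary, unlike xvexth above.  */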
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x639c3fffb5dffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xb8c7800094400001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0063009c003f00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00b500df00ff00fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x00b800c700800000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0094004000000001;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff000000ff;
+  __m256i_out = __lasx_vext2xv_wu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000fe;
+  __m256i_out = __lasx_vext2xv_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000f6ff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000f6ff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_vext2xv_du_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6100000800060005;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5ee1c073b800c916;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5ff00007fff9fff3;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000005f000000f0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000f9;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff000000f3;
+  __m256i_out = __lasx_vext2xv_wu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000fffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffffefd;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000fd;
+  __m256i_out = __lasx_vext2xv_du_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000017f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000017f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000017f;
+  __m256i_out = __lasx_vext2xv_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000002e0000002e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000002e0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000002e0000002e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000002e0000fffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000002e;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000002e;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000002e;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000fffe;
+  __m256i_out = __lasx_vext2xv_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ff00fff0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff00fffffff0;
+  __m256i_out = __lasx_vext2xv_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
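+  /* Sign extension from bytes to words: negative bytes (0xd6, 0xd5,
+     0xf2) are expected to widen to 0xffffffxx while the positive 0x2a
+     bytes stay zero-padded.  */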
+  *((unsigned long*)& __m256i_op0[3]) = 0x2b2b2b2b1bd5d5d6;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2a2a2a2af2d5d5d6;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2b2b2b2b1bd5d5d6;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2a2a2a2af2d5d5d6;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002a0000002a;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002a0000002a;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffff2ffffffd5;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffd5ffffffd6;
+  __m256i_out = __lasx_vext2xv_w_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_wu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_vext2xv_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffcfee0fe00ffe0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffcfee0fe00ffe0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fffc0000fee0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000fe000000ffe0;
+  __m256i_out = __lasx_vext2xv_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_vext2xv_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xd100645944100004;
+  *((unsigned long*)& __m256i_op0[2]) = 0xd1908469108400d1;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000404040104;
+  *((unsigned long*)& __m256i_op0[0]) = 0xd1108199714910f9;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000004040104;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffd1108199;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000714910f9;
+  __m256i_out = __lasx_vext2xv_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_vext2xv_du_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000001b0000001b;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000001b00fd0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000001b0000001b;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001b00fd0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000001b;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000001b;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000001b;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000fd00000000;
+  __m256i_out = __lasx_vext2xv_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000801380f380fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000801380f300fb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000008013;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000080f3;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000fb;
+  __m256i_out = __lasx_vext2xv_du_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_wu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000483800;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001700170017;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000017;
+  __m256i_out = __lasx_vext2xv_d_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_vext2xv_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_vext2xv_du_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00020002ff820002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00020002ff820002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffff82;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_vext2xv_d_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_vext2xv_d_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_wu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000017f7f7f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000017f7f7f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000017f00007f7f;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007f0000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_vext2xv_d_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff0000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff0000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ff00000000ff;
+  __m256i_out = __lasx_vext2xv_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ff7fff7f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ff7f027f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ff7f0100;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00fe00fe7f027f;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000007f;
+  __m256i_out = __lasx_vext2xv_du_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000007f;
+  __m256i_out = __lasx_vext2xv_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfff0fff0ff01ff01;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff0fff0fff0fff0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfff0fff0ff01ff01;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff0fff0fff0fff0;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffffffffff0;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffffffffff0;
+  __m256i_out = __lasx_vext2xv_d_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000aaabffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00aa00ab00ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00aa00ab00ff00ff;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_vext2xv_w_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
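+  /* Unsigned halfword-to-doubleword widening of 0x8000 must not
+     propagate the sign bit; each expected doubleword is
+     0x0000000000008000.  */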
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000008000;
+  __m256i_out = __lasx_vext2xv_du_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000ff;
+  __m256i_out = __lasx_vext2xv_du_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_wu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_vext2xv_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000003fbfc04;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001fdfe02;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000003fbfc04;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001fdfe02;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000fd;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_vext2xv_du_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000781;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000064;
+  __m256i_out = __lasx_vext2xv_wu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffe20;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001dfffffe1f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_vext2xv_d_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x01ff01ff01c0003e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01ff01ff01c0003e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000100ff000100ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000100c00000003e;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000f0001000f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000f0001000d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000f0001000f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000f0001000d;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000010000000f;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000010000000f;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000010000000f;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000010000000d;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0200000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0200000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000020000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000200000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000029;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000029;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000029;
+  __m256i_out = __lasx_vext2xv_wu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000020000000200;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffefff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000ef;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000080;
+  __m256i_out = __lasx_vext2xv_du_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000000;
+  __m256i_out = __lasx_vext2xv_wu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x03fbfffc03fc07fc;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x03fbfffc03fc07fc;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_vext2xv_d_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
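+  /* The next two checks apply the signed and the unsigned
+     halfword-to-word widening to the same input, so the 0xffe0 element
+     shows the difference: 0xffffffe0 for vext2xv_w_h versus 0x0000ffe0
+     for vext2xv_wu_hu.  */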
+  *((unsigned long*)& __m256i_op0[3]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000498000000080;
+  *((unsigned long*)& __m256i_result[2]) = 0x00004843ffffffe0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000498000000080;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000684000000000;
+  __m256i_out = __lasx_vext2xv_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000498000000080;
+  *((unsigned long*)& __m256i_result[2]) = 0x000048430000ffe0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000498000000080;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000684000000000;
+  __m256i_out = __lasx_vext2xv_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_w_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000080;
+  __m256i_out = __lasx_vext2xv_hu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000002;
+  __m256i_out = __lasx_vext2xv_wu_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_du_bu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_vext2xv_du_wu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_h_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_vext2xv_d_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_vext2xv_d_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff7eddffff7ed3;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff7edfffff7edf;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff7eddffff7ed3;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff7edfffff7edf;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff00007edd;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00007ed3;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff00007edf;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00007edf;
+  __m256i_out = __lasx_vext2xv_wu_hu(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_vext2xv_d_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
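+  /* The xvsigncov_{b,h,w,d} checks expect each result element to be the
+     negation of the op1 element when the corresponding op0 element is
+     negative, zero when the op0 element is zero, and the op1 element
+     unchanged when the op0 element is positive.  */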
+  *((unsigned long*)& __m256i_op0[3]) = 0x97541c5897541c58;
+  *((unsigned long*)& __m256i_op0[2]) = 0x97541c5897541c58;
+  *((unsigned long*)& __m256i_op0[1]) = 0x97541c5897541c58;
+  *((unsigned long*)& __m256i_op0[0]) = 0x97541c5897541c58;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff605a;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff605a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101000000000000;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff39ffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff39ffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0202810102020202;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0202810102020202;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000fefe0000fefe;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007fff0000fefe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fefe0000fefe;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fff0000fefe;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000017547fffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000017547fffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x807e80fd80fe80fd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x80938013800d8002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x807e80fd80fe0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x80938013800d0005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000801380f380fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000801380f300fb;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000fffd5a98;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000101ff01;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0002000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0002000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffee;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff80ff00ff80ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff80ff00ff80ff01;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000fd;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000fd;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8011ffee804c004c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00faff0500c3ff3c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x80f900f980780078;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0057ffa800ceff31;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000000010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff000000010000;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3880800037800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3901000039010000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3880800037800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x3901000039010000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003fc00000428a;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffc040ffffc09d;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003fc00000428a;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffefffefffeffee;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe0000fffe0012;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffefffefffeffee;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe0000fffe0012;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000001ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000001ffff;
+  __m256i_out = __lasx_xvsigncov_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80be0000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80be0000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000100000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000100000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffff00000000;
+  __m256i_out = __lasx_xvsigncov_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdf80df80df80df80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xdfc2df80df80df87;
+  *((unsigned long*)& __m256i_op0[1]) = 0xdf80df80df80df80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xdfc2df80df80df87;
+  *((unsigned long*)& __m256i_op1[3]) = 0xdf80df80df80df80;
+  *((unsigned long*)& __m256i_op1[2]) = 0xdfc2df80df80df87;
+  *((unsigned long*)& __m256i_op1[1]) = 0xdf80df80df80df80;
+  *((unsigned long*)& __m256i_op1[0]) = 0xdfc2df80df80df87;
+  *((unsigned long*)& __m256i_result[3]) = 0x2080208020802080;
+  *((unsigned long*)& __m256i_result[2]) = 0x203e208020802079;
+  *((unsigned long*)& __m256i_result[1]) = 0x2080208020802080;
+  *((unsigned long*)& __m256i_result[0]) = 0x203e208020802079;
+  __m256i_out = __lasx_xvsigncov_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffe05f8102;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffe05f8102;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007fffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000004e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffba8300004fc2;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffba8300004fc2;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x004100df00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00c000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x004100df00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00c000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc1d75053f0000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc1d75053f0000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256i_result[2]) = 0xc1d75053f0000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256i_result[0]) = 0xc1d75053f0000000;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffa30000165a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000104000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffa30000165a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000104000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc1d75053f0000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x41dfffffffc00000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xc1d75053f0000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xbe21000100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000505300000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xbe21000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000505300000000;
+  __m256i_out = __lasx_xvsigncov_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000001880310877e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000001880310877e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000f788f788;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000f788f788;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002000000020;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff6361;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4d0a902890b800dc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffff6361;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4d0a902890b800dc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000001faf19b60;
+  *((unsigned long*)& __m256i_op1[2]) = 0x6c2905ae7c14c561;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000001faf19b60;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6c2905ae7c14c561;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x94d7fb5200000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x94d7fb5200000000;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffeb664007ffd61;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffe97a1df5b41b0;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffeb664007ffd61;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffe97a1df5b41b0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000180;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvsigncov_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8282828282828282;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8768876887688769;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8282828282828282;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8768876887688769;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000003fffc0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000003fffc0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffc00040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffc00040;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffdbff980038ffaf;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffafffe80004fff1;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffdbff980038ffaf;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffafffe80004fff1;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0002fffc;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000fffd0003;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0002fffc;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000fffd0003;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff0000fffd0004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff0000fffd0004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0002fffd;
+  __m256i_out = __lasx_xvsigncov_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
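+  /* The following blocks exercise __lasx_xvmskltz_{b,h,w,d}, which gather
+     the sign bits (element < 0) of each 128-bit lane into the low bits of
+     that lane's first double-word.  */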
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000007;
+  __m256i_out = __lasx_xvmskltz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x3922d40000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000c85221c0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf7ebfab800000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000f20;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000009f0;
+  __m256i_out = __lasx_xvmskltz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x40d74f979f99419f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000022;
+  __m256i_out = __lasx_xvmskltz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010100000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010100000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5980000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000040;
+  __m256i_out = __lasx_xvmskltz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1f9d9f9d1f9db29f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1f9d9f9d201cb39e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x201c9f9d201cb29f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1f9d9f9d201cb39e;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007773;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000003373;
+  __m256i_out = __lasx_xvmskltz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc080ffff0049ffd2;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0049ffd2;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fffeffb9ff9d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00630064004bffd0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe0f02081c1c4ce2c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8008000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe0f02081c1c4ce2c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8008000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000b8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000b8;
+  __m256i_out = __lasx_xvmskltz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000003;
+  __m256i_out = __lasx_xvmskltz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000001fffc0001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000022;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000022;
+  __m256i_out = __lasx_xvmskltz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010200000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010200000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0a0a0a0a7f0a0a0a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvmskltz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000088;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000088;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x296e000018170000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x296e000018170000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000404;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000404;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffc000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffeff000c057c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffc000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffeff000c057c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000f0f0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000f0f0;
+  __m256i_out = __lasx_xvmskltz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffb2f600006f48;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffb2f600006f48;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000008c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000008c;
+  __m256i_out = __lasx_xvmskltz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff801000000010;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800300000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff801000000010;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800300000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000cc;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000cc;
+  __m256i_out = __lasx_xvmskltz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
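+  /* The result of the __lasx_xvpickve2gr_wu call below is not checked.  */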
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x5);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000055;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000054;
+  __m256i_out = __lasx_xvmskltz_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskltz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
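+  /* The following blocks exercise __lasx_xvmskgez_b, the per-lane mask of
+     byte elements that are greater than or equal to zero.  */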
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ff00;
+  __m256i_out = __lasx_xvmskgez_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvmskgez_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskgez_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmskgez_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001ff03ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000203ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001ff03ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000203ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000fafe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000fafe;
+  __m256i_out = __lasx_xvmskgez_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000010100000101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000010100000101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvmskgez_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
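+  /* __lasx_xvmsknz_b sets a mask bit for every non-zero byte; as above, each
+     128-bit half produces a 16-bit mask in its low element.  */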
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0020002000400040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0020002000400040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0020002000400040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0020002000400040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000005555;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000005555;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000300000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000300000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000004411;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000004411;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000033;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000033;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000008050501;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0100000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000f91;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000f91;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000001f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000001f;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x006018000000001a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0060401900000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x006018000000001a;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0060401900000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000006170;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000006170;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf2b180c9fc1fefdc;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf2b180c9fc1fefdc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000002ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000002ff;
+  __m256i_out = __lasx_xvmsknz_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
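+  /* __lasx_xvldi builds a vector constant from its 13-bit signed immediate;
+     each check below compares the generated constant against the
+     precomputed pattern for that immediate.  */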
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001000000010;
+  __m256i_out = __lasx_xvldi(-4080);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_result[3]) = 0xfebcfebcfebcfebc;
+  *((unsigned long*)& __m256i_result[2]) = 0xfebcfebcfebcfebc;
+  *((unsigned long*)& __m256i_result[1]) = 0xfebcfebcfebcfebc;
+  *((unsigned long*)& __m256i_result[0]) = 0xfebcfebcfebcfebc;
+  __m256i_out = __lasx_xvldi(1724);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_result[3]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3fd1000000000000;
+  __m256i_out = __lasx_xvldi(-943);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_result[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[0]) = 0xff1cff1cff1cff1c;
+  __m256i_out = __lasx_xvldi(1820);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_result[3]) = 0x7200000072000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7200000072000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7200000072000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7200000072000000;
+  __m256i_out = __lasx_xvldi(-3214);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_result[3]) = 0xffffff1dffffff1d;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffff1dffffff1d;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffff1dffffff1d;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff1dffffff1d;
+  __m256i_out = __lasx_xvldi(2845);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_result[3]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000001000000010;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001000000010;
+  __m256i_out = __lasx_xvldi(-4080);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_result[3]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x3fd1000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3fd1000000000000;
+  __m256i_out = __lasx_xvldi(-943);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_result[3]) = 0x7200000072000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7200000072000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7200000072000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7200000072000000;
+  __m256i_out = __lasx_xvldi(-3214);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-mem.c b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-mem.c
new file mode 100644
index 00000000000..022b2e7ec4a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-mem.c
@@ -0,0 +1,147 @@
+/* { dg-do run } */
+/* { dg-options "-mlasx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lasxintrin.h>
+
+int main ()
+{
+  __m256i __m256i_op0, __m256i_op1, __m256i_op2, __m256i_out, __m256i_result;
+  __m256 __m256_op0, __m256_op1, __m256_op2, __m256_out, __m256_result;
+  __m256d __m256d_op0, __m256d_op1, __m256d_op2, __m256d_out, __m256d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+
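+  /* Memory-access checks: xvldx/xvstx perform full 256-bit loads and stores
+     with a register offset, xvldrepl_{b,h,w,d} load one element and
+     replicate it across the vector, and xvstelm_{b,h,w,d} store a single
+     indexed element to memory.  */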
+  *((unsigned long*)& __m256i_op0[3]) = 0x042f0500cfea969a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa98d4f7a77c308ee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[3]) = 0x042f0500cfea969a;
+  *((unsigned long*)& __m256i_result[2]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_result[1]) = 0xa98d4f7a77c308ee;
+  *((unsigned long*)& __m256i_result[0]) = 0x0ad152a5ad72feeb;
+  __m256i_out = __lasx_xvldx((unsigned long *)&__m256i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x042f0500cfea969a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa98d4f7a77c308ee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0;
+  __lasx_xvstx(__m256i_op0, (unsigned long *)&__m256i_result, 0x0);
+  ASSERTEQ_64(__LINE__, __m256i_op0, __m256i_result);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x042f0500cfea969a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa98d4f7a77c308ee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[3]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_result[2]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_result[1]) = 0xebebebebebebebeb;
+  *((unsigned long*)& __m256i_result[0]) = 0xebebebebebebebeb;
+  __m256i_out = __lasx_xvldrepl_b((unsigned long *)&__m256i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x042f0500cfea969a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa98d4f7a77c308ee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[3]) = 0xfeebfeebfeebfeeb;
+  *((unsigned long*)& __m256i_result[2]) = 0xfeebfeebfeebfeeb;
+  *((unsigned long*)& __m256i_result[1]) = 0xfeebfeebfeebfeeb;
+  *((unsigned long*)& __m256i_result[0]) = 0xfeebfeebfeebfeeb;
+  __m256i_out = __lasx_xvldrepl_h((unsigned long *)&__m256i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x042f0500cfea969a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa98d4f7a77c308ee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[3]) = 0xad72feebad72feeb;
+  *((unsigned long*)& __m256i_result[2]) = 0xad72feebad72feeb;
+  *((unsigned long*)& __m256i_result[1]) = 0xad72feebad72feeb;
+  *((unsigned long*)& __m256i_result[0]) = 0xad72feebad72feeb;
+  __m256i_out = __lasx_xvldrepl_w((unsigned long *)&__m256i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x042f0500cfea969a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa98d4f7a77c308ee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[2]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[0]) = 0x0ad152a5ad72feeb;
+  __m256i_out = __lasx_xvldrepl_d((unsigned long *)&__m256i_op0, 0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x042f0500cfea969a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa98d4f7a77c308ee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0;
+  *((unsigned long*)& __m256i_result[0]) = 0x8d;
+  *((unsigned long*)& __m256i_out[3]) = 0x0;
+  *((unsigned long*)& __m256i_out[2]) = 0x0;
+  *((unsigned long*)& __m256i_out[1]) = 0x0;
+  *((unsigned long*)& __m256i_out[0]) = 0x0;
+  __lasx_xvstelm_b(__m256i_op0, (unsigned long *)&__m256i_out, 0x0, 0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x042f0500cfea969a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa98d4f7a77c308ee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0;
+  *((unsigned long*)& __m256i_result[0]) = 0x9100;
+  *((unsigned long*)& __m256i_out[3]) = 0x0;
+  *((unsigned long*)& __m256i_out[2]) = 0x0;
+  *((unsigned long*)& __m256i_out[1]) = 0x0;
+  *((unsigned long*)& __m256i_out[0]) = 0x0;
+  __lasx_xvstelm_h(__m256i_op0, (unsigned long *)&__m256i_out, 0x0, 0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x042f0500cfea969a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa98d4f7a77c308ee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0;
+  *((unsigned long*)& __m256i_result[0]) = 0xe9179100;
+  *((unsigned long*)& __m256i_out[3]) = 0x0;
+  *((unsigned long*)& __m256i_out[2]) = 0x0;
+  *((unsigned long*)& __m256i_out[1]) = 0x0;
+  *((unsigned long*)& __m256i_out[0]) = 0x0;
+  __lasx_xvstelm_w(__m256i_op0, (unsigned long *)&__m256i_out, 0x0, 0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x042f0500cfea969a;
+  *((unsigned long*)& __m256i_op0[2]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_op0[1]) = 0xa98d4f7a77c308ee;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0ad152a5ad72feeb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0;
+  *((unsigned long*)& __m256i_result[2]) = 0x0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0;
+  *((unsigned long*)& __m256i_result[0]) = 0x58569d7be9179100;
+  *((unsigned long*)& __m256i_out[3]) = 0x0;
+  *((unsigned long*)& __m256i_out[2]) = 0x0;
+  *((unsigned long*)& __m256i_out[1]) = 0x0;
+  *((unsigned long*)& __m256i_out[0]) = 0x0;
+  __lasx_xvstelm_d(__m256i_op0, (unsigned long *)&__m256i_out, 0x0, 0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-perm.c b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-perm.c
new file mode 100644
index 00000000000..e599c562af9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-perm.c
@@ -0,0 +1,7730 @@
+/* { dg-do run } */
+/* { dg-options "-mlasx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lasxintrin.h>
+
+int main ()
+{
+  __m256i __m256i_op0, __m256i_op1, __m256i_op2, __m256i_out, __m256i_result;
+  __m256 __m256_op0, __m256_op1, __m256_op2, __m256_out, __m256_result;
+  __m256d __m256d_op0, __m256d_op1, __m256d_op2, __m256d_out, __m256d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+
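+  /* Element-access checks: xvpickve2gr_{w,wu,d,du} extract one vector
+     element into a general-purpose register, xvreplgr2vr_{b,h,w,d} broadcast
+     a register value into every element, and xvreplve_{b,h,w,d} broadcast
+     the element selected by a register index.  */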
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0cc08723ff900001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xcc9b89f2f6cef440;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x7);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x6);
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff00000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x3);
+  *((int*)& __m256_op0[7]) = 0xfffffff8;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ff90ff81;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ff90ff81;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000007f;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x4);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x2);
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefdfffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x4);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff00000100;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5555555580000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5555555580000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x5);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0002000400000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0002000200020006;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x0);
+  *((unsigned long*)& __m256d_op0[3]) = 0xfffefffe00000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff8000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1f0fdf7f3e3b31d4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff8000000000000;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x3);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00fe01fc01fe01fc;
+  *((unsigned long*)& __m256i_op0[2]) = 0x012c002c001c0006;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00fe01fc01fe0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x012c002c001c000a;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x807e80fd80fe80fd;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x3);
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x5);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x0);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000ff;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x6);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000022be22be;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x5);
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffff0100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffff0100000001;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x7);
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff0008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffff0008;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x6);
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x0);
+  *((unsigned long*)& __m256d_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000000d;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x0);
+  *((int*)& __m256_op0[7]) = 0x0000ff01;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0010001000100010;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000100010;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0010001000100010;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x2);
+  *((unsigned long*)& __m256d_op0[3]) = 0xffffffff00000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x3);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000100040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000100040;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x6);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000004843ffdff;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x3);
+  *((unsigned long*)& __m256i_op0[3]) = 0x07fee332883f86b0;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x6);
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  long_int_out = __lasx_xvpickve2gr_d(__m256i_op0,0x0);
+  *((int*)& __m256_op0[7]) = 0xffffffff;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x5);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x4);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x1);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x3);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x0);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffd880;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffd880;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x03af03af03af03af;
+
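+  /* xvreplgr2vr_{b,h,w,d}: replicate the low byte/halfword/word/doubleword
+     of the scalar operand into every element of the result.  */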
+  int_op0 = 0x0000001b3c4c0a5c;
+  *((unsigned long*)& __m256i_result[3]) = 0x3c4c0a5c3c4c0a5c;
+  *((unsigned long*)& __m256i_result[2]) = 0x3c4c0a5c3c4c0a5c;
+  *((unsigned long*)& __m256i_result[1]) = 0x3c4c0a5c3c4c0a5c;
+  *((unsigned long*)& __m256i_result[0]) = 0x3c4c0a5c3c4c0a5c;
+  __m256i_out = __lasx_xvreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000397541c58;
+  *((unsigned long*)& __m256i_result[3]) = 0x97541c5897541c58;
+  *((unsigned long*)& __m256i_result[2]) = 0x97541c5897541c58;
+  *((unsigned long*)& __m256i_result[1]) = 0x97541c5897541c58;
+  *((unsigned long*)& __m256i_result[0]) = 0x97541c5897541c58;
+  __m256i_out = __lasx_xvreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  long_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000400;
+  *((unsigned long*)& __m256i_result[3]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_result[2]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_result[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_result[0]) = 0x0400040004000400;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000084;
+  *((unsigned long*)& __m256i_result[3]) = 0x0084008400840084;
+  *((unsigned long*)& __m256i_result[2]) = 0x0084008400840084;
+  *((unsigned long*)& __m256i_result[1]) = 0x0084008400840084;
+  *((unsigned long*)& __m256i_result[0]) = 0x0084008400840084;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  long_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff000000ff;
+  __m256i_out = __lasx_xvreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020202020202020;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020202020202020;
+  __m256i_out = __lasx_xvreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  long_op0 = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000020202020;
+  __m256i_out = __lasx_xvreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  long_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  long_op0 = 0x0000000000020006;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000020006;
+  __m256i_out = __lasx_xvreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff000000ff;
+  __m256i_out = __lasx_xvreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  long_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_b(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  long_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  long_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff000000ff;
+  __m256i_out = __lasx_xvreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  long_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  long_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  long_op0 = 0x0000000000020006;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000020006;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000020006;
+  __m256i_out = __lasx_xvreplgr2vr_d(long_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvreplgr2vr_h(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  int_op0 = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvreplgr2vr_w(int_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
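+  /* The tests below exercise xvreplve.{b,h,w,d}: the element of the vector
+     operand selected by the low bits of the GPR operand is replicated
+     within each 128-bit lane, as the expected values reflect.  */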
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000001b3c4c0a5c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffefb;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffefb;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000fe;
+  int_op1 = 0x0000000059815d00;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000fe;
+  __m256i_out = __lasx_xvreplve_d(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op0[2]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op0[1]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op0[0]) = 0x555555ab555555ab;
+  int_op1 = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_result[2]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_result[1]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_result[0]) = 0x555555ab555555ab;
+  __m256i_out = __lasx_xvreplve_d(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000012e2110;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_result[2]) = 0x0202020202020202;
+  *((unsigned long*)& __m256i_result[1]) = 0x1010101010101010;
+  *((unsigned long*)& __m256i_result[0]) = 0x1010101010101010;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000003f;
+  int_op1 = 0x0000000000000400;
+  *((unsigned long*)& __m256i_result[3]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_result[2]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_result[1]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_result[0]) = 0x003f003f003f003f;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x003f003f003f003f;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_result[2]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_result[1]) = 0x003f003f003f003f;
+  *((unsigned long*)& __m256i_result[0]) = 0x003f003f003f003f;
+  __m256i_out = __lasx_xvreplve_w(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000003f0000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe161616161616161;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe161616161616161;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe161616161614e60;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_result[2]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_result[1]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_result[0]) = 0xe161616161614e60;
+  __m256i_out = __lasx_xvreplve_d(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000080;
+  int_op1 = 0x00000000000000ac;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000080;
+  __m256i_out = __lasx_xvreplve_d(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000400;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00d5007f00ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00ffffff00ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00d5007f00ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00ffffff00ffff;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_d(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_w(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000020202020;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020202020202020;
+  __m256i_out = __lasx_xvreplve_w(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffff7fffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffff7fffff;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc192181230000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc192181230000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000ff00ff;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000ff00ff;
+  __m256i_out = __lasx_xvreplve_d(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fef7fef7fef7fef;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fef7fef7fef7fef;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fef7fef7fef7fef;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fef7fef7fef7fef;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffff00ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffff00ffffffff;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff7fff0000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f010700c70106;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f010700c70106;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0106010601060106;
+  *((unsigned long*)& __m256i_result[2]) = 0x0106010601060106;
+  *((unsigned long*)& __m256i_result[1]) = 0x0106010601060106;
+  *((unsigned long*)& __m256i_result[0]) = 0x0106010601060106;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_h(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvreplve_w(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_d(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000003fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000003fff;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000040;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[0]) = 0x4040404040404040;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000404;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000404;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_result[2]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_result[1]) = 0x0404040404040404;
+  *((unsigned long*)& __m256i_result[0]) = 0x0404040404040404;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000800080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000800080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000202;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000202;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000202;
+  __m256i_out = __lasx_xvreplve_d(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x000000003ddc5dac;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_d(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  int_op1 = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000200000002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000002;
+  int_op1 = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve_b(__m256i_op0,int_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
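+  /* The tests below exercise xvrepl128vei.{b,h,w,d}: the element selected
+     by the immediate index is replicated within each 128-bit lane.  */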
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_d(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_h(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_d(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_w(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_b(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_w(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvrepl128vei_b(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_h(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000004000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000004000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_result[2]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_result[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_result[0]) = 0x0400040004000400;
+  __m256i_out = __lasx_xvrepl128vei_h(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_w(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_d(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_b(__m256i_op0,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000300000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0002000000020000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000300000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_b(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_d(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000003f0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000030007;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_h(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_w(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000141020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000141020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1020102010201020;
+  *((unsigned long*)& __m256i_result[2]) = 0x1020102010201020;
+  *((unsigned long*)& __m256i_result[1]) = 0x1020102010201020;
+  *((unsigned long*)& __m256i_result[0]) = 0x1020102010201020;
+  __m256i_out = __lasx_xvrepl128vei_h(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_b(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_h(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_w(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_h(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_h(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_w(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_w(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1fa0000000080000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1fa0000000080000;
+  __m256i_out = __lasx_xvrepl128vei_d(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvrepl128vei_b(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
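+  /* The tests below exercise xvreplve0.{b,h,w,d,q}: element 0 of the vector
+     operand (the low 128 bits for the _q variant) is replicated across the
+     whole register.  */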
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffeffffff88;
+  *((unsigned long*)& __m256i_op0[2]) = 0x61e0000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffeffffff88;
+  *((unsigned long*)& __m256i_op0[0]) = 0x61e0000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvreplve0_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffff80fe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xd52aaaaa555555ab;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffff80fe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xd52aaaaa555555ab;
+  *((unsigned long*)& __m256i_result[3]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_result[2]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_result[1]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_result[0]) = 0x555555ab555555ab;
+  __m256i_out = __lasx_xvreplve0_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvreplve0_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_result[3]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[2]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[1]) = 0x8080808080808080;
+  *((unsigned long*)& __m256i_result[0]) = 0x8080808080808080;
+  __m256i_out = __lasx_xvreplve0_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fff3fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x3fff3fff3fff4000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000403f3fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x3fff3fff3fff3fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x3fff3fff3fff3fff;
+  __m256i_out = __lasx_xvreplve0_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000080000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000001;
+  __m256i_out = __lasx_xvreplve0_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000020202020;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020202020202020;
+  __m256i_out = __lasx_xvreplve0_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_q(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0007fd00000f02ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001fffeff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00fe00feff02ff;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00fe00feff02ff;
+  __m256i_out = __lasx_xvreplve0_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfc00ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000100fe000100fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfc00ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000100fe000100fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x00fe00fe00fe00fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x00fe00fe00fe00fe;
+  __m256i_out = __lasx_xvreplve0_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[0]) = 0x4040404040404040;
+  __m256i_out = __lasx_xvreplve0_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_q(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000781;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000064;
+  __m256i_out = __lasx_xvreplve0_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_q(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_q(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvreplve0_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvreplve0_q(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffe0047d00e00480;
+  *((unsigned long*)& __m256i_op0[2]) = 0x001fc0200060047a;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffe0047d00e00480;
+  *((unsigned long*)& __m256i_op0[0]) = 0x001fc0200060047a;
+  *((unsigned long*)& __m256i_result[3]) = 0x047a047a047a047a;
+  *((unsigned long*)& __m256i_result[2]) = 0x047a047a047a047a;
+  *((unsigned long*)& __m256i_result[1]) = 0x047a047a047a047a;
+  *((unsigned long*)& __m256i_result[0]) = 0x047a047a047a047a;
+  __m256i_out = __lasx_xvreplve0_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x037fe01f001fe020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x037fe01f001fe020;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020202020202020;
+  __m256i_out = __lasx_xvreplve0_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[2]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0d0d0d0d0d0d0d0d;
+  *((unsigned long*)& __m256i_result[0]) = 0x0d0d0d0d0d0d0d0d;
+  __m256i_out = __lasx_xvreplve0_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[3]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[2]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[1]) = 0x0202010202020102;
+  *((unsigned long*)& __m256i_result[0]) = 0x0202010202020102;
+  __m256i_out = __lasx_xvreplve0_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00ff00ff;
+  __m256i_out = __lasx_xvreplve0_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffe00000001;
+  __m256i_out = __lasx_xvreplve0_d(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x800080ff800080ff;
+  __m256i_out = __lasx_xvreplve0_w(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_q(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvreplve0_q(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffff97a2;
+  *((unsigned long*)& __m256i_result[3]) = 0x97a297a297a297a2;
+  *((unsigned long*)& __m256i_result[2]) = 0x97a297a297a297a2;
+  *((unsigned long*)& __m256i_result[1]) = 0x97a297a297a297a2;
+  *((unsigned long*)& __m256i_result[0]) = 0x97a297a297a297a2;
+  __m256i_out = __lasx_xvreplve0_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvreplve0_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_h(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x2);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvreplve0_b(__m256i_op0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
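+  /* Tests for the __lasx_xvinsve0_{w,d} builtins: element 0 of the second
+     operand is inserted into the selected element of the first operand,
+     as the expected vectors below reflect.  */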
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvinsve0_d(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0005000500050005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000050005;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000004fb;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000004fb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvinsve0_d(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffefe00000000;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvinsve0_d(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000170017;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000170017;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000170017;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000017;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000170017;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvinsve0_d(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fffffffe;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffefffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffefffffffe;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000040404040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff000200000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff000200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x001f00e0ff800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x001f00e0ff800000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff80000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff000200000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff000200000000;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op0[2]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op0[1]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op0[0]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[3]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[2]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[1]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_op1[0]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[3]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[2]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[1]) = 0x9090909090909090;
+  *((unsigned long*)& __m256i_result[0]) = 0x9090909090909090;
+  __m256i_out = __lasx_xvinsve0_d(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvinsve0_d(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000040b200002fd4;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00007fff0000739c;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000040b200002fd4;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00007fff0000739c;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000739c;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000ff;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff800080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff800080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff800080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff800000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff800000000000;
+  __m256i_out = __lasx_xvinsve0_d(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256i_op0[0]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x04e8296f18181818;
+  *((unsigned long*)& __m256i_op1[2]) = 0x132feea900000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x04e8296f18181818;
+  *((unsigned long*)& __m256i_op1[0]) = 0x132feea900000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x6018000000000cd1;
+  *((unsigned long*)& __m256i_result[2]) = 0x6040190d00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x132feea900000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x6040190d00000000;
+  __m256i_out = __lasx_xvinsve0_d(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvinsve0_d(__m256i_op0,__m256i_op1,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ff88ffc0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ff78ffc0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002000000020;
+  __m256i_out = __lasx_xvinsve0_d(__m256i_op0,__m256i_op1,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x03fbfffc03fc07fc;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x03fbfffc03fc07fc;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000ffff0000ffff;
+  __m256i_out = __lasx_xvinsve0_d(__m256i_op0,__m256i_op1,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffff0020;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff8001ffff0001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0020;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff8001ffff0001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffff0020;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff8001ffff0001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff8001ffff0001;
+  __m256i_out = __lasx_xvinsve0_w(__m256i_op0,__m256i_op1,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
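+  /* Tests for the __lasx_xvpickve_{w,d} builtins: the selected element is
+     copied into element 0 of the result and the remaining elements are
+     cleared.  */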
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x010180068080fff9;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvpickve_d(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickve_w(__m256i_op0,0x0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00fe01f000010000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000c40086;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000c40086;
+  __m256i_out = __lasx_xvpickve_d(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickve_d(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff820002ff820002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0002000200020002;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff820002ff820002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0002000200020002;
+  __m256i_out = __lasx_xvpickve_d(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x4000000000000000;
+  __m256i_out = __lasx_xvpickve_d(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickve_w(__m256i_op0,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickve_d(__m256i_op0,0x2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvpickve_w(__m256i_op0,0x3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvpickve_w(__m256i_op0,0x1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
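+  /* Tests for the __lasx_xvbsll_v (vector byte shift left) builtin.  */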
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00555555553f8000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00555555553f8000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbsll_v(__m256i_op0,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbsll_v(__m256i_op0,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbsll_v(__m256i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000030000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000030000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbsll_v(__m256i_op0,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[0]) = 0x2020202020206431;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020643100000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020643100000000;
+  __m256i_out = __lasx_xvbsll_v(__m256i_op0,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000b2673a90896a4;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000050504c4c2362;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000b2673a90896a4;
+  *((unsigned long*)& __m256i_result[3]) = 0xa90896a400000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xa90896a400000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbsll_v(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x003f003f003f0040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x003f003f003f0040;
+  *((unsigned long*)& __m256i_result[3]) = 0x00003f003f003f00;
+  *((unsigned long*)& __m256i_result[2]) = 0x4000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00003f003f003f00;
+  *((unsigned long*)& __m256i_result[0]) = 0x4000000000000000;
+  __m256i_out = __lasx_xvbsll_v(__m256i_op0,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbsll_v(__m256i_op0,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbsll_v(__m256i_op0,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbsll_v(__m256i_op0,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
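+  /* Tests for the __lasx_xvbsrl_v (vector byte shift right) builtin.  */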
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000007d0d0d0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000007d0d0d0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000007d0d0d00000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000007d0d0d00000;
+  __m256i_out = __lasx_xvbsrl_v(__m256i_op0,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000001fffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x01fffffffe000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fffffffe000000;
+  __m256i_out = __lasx_xvbsrl_v(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000018803100188;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000018803100188;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvbsrl_v(__m256i_op0,0x15);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvbsrl_v(__m256i_op0,0x1b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
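+  /* Tests for the __lasx_xvpackev_{b,h,w,d} builtins, which interleave the
+     even-indexed elements of the two operands.  */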
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x81f7f2599f0509c2;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x51136d3c78388916;
+  *((unsigned long*)& __m256i_op1[3]) = 0x044819410d87e69a;
+  *((unsigned long*)& __m256i_op1[2]) = 0x21d3905ae3e93be0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x5125883a30da0f20;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6d7b2d3ac2777aeb;
+  *((unsigned long*)& __m256i_result[3]) = 0x000019410000e69a;
+  *((unsigned long*)& __m256i_result[2]) = 0xf259905a09c23be0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000883a00000f20;
+  *((unsigned long*)& __m256i_result[0]) = 0x6d3c2d3a89167aeb;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4f8000004f800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4f7fffbf0000fe00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000004f800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4f7fffe64f7fffc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfe02fe02fee5fe22;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff49fe4200000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffbf0000fe000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fe020000fe22;
+  *((unsigned long*)& __m256i_result[0]) = 0xffe6fe42ffc00000;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x6);
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc06500550055ffab;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc06500550055ffab;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00550000ffab0001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00550000ffab0001;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000001000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000401000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000400000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000400000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000400000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000400000000;
+  __m256i_out = __lasx_xvpackev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op1[2]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x01fe01fe00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x01fe01fe01fe01fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x01fe01fe00000000;
+  __m256i_out = __lasx_xvpackev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ffffffffff;
+  __m256i_out = __lasx_xvpackev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvpackev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000089;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000154dc84;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000089;
+  __m256i_out = __lasx_xvpackev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000200;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000200;
+  __m256i_out = __lasx_xvpackev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7f8000007f800000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fc000007fc00000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0080010000800100;
+  *((unsigned long*)& __m256i_result[2]) = 0x00c0000000c00000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0080010000800100;
+  *((unsigned long*)& __m256i_result[0]) = 0x00c0000000c00000;
+  __m256i_out = __lasx_xvpackev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000001fdfffffe02;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000001fefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff01fefffeff02;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000fd00ffff02ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000001fffeff;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00fe00feff02ff;
+  __m256i_out = __lasx_xvpackev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8011ffee804c004c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00faff0500c3ff3c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x80f900f980780078;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0057ffa800ceff31;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffee0000004c0000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff050000ff3c0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00f9000000780000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffa80000ff310000;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001d0000001d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00001d0000001d00;
+  __m256i_out = __lasx_xvpackev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffe20001dfe1f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003fe000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003fe000000000;
+  __m256i_out = __lasx_xvpackev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000000;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000100040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000100040;
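+  /* xvpickve2gr.wu extracts the selected word element (index 6 here)
+     into a scalar, zero-extended; the scalar result is exercised but
+     not checked at this point.  */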
+  unsigned_int_out = __lasx_xvpickve2gr_wu(__m256i_op0,0x6);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ff890000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ff790000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ff890000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ff790000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ff790000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ff790000;
+  __m256i_out = __lasx_xvpackev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x41dffbffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffff00ff800000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x41dffbffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffff00ff800000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfbff0000ffff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfbff0000ffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00000000000000;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffe7ffffffe7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000007b007e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000007b007e;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffe700000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffe7007b007e;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffe700000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffe7007b007e;
+  __m256i_out = __lasx_xvpackev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000008000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0003fffc0803fff8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000008000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0003fffc0803fff8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000fffc0000fff8;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000fffc0000fff8;
+  __m256i_out = __lasx_xvpackev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
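+  /* The xvpackod.{b,h,w,d} tests below interleave the odd-indexed
+     elements of the two sources: result element 2i is element 2i+1 of
+     the second operand and result element 2i+1 is element 2i+1 of the
+     first operand.  */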
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7f057f0b7f5b007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7f00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f7fff7fff7fff00;
+  __m256i_out = __lasx_xvpackod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff00000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fffffff;
+  __m256i_out = __lasx_xvpackod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000fff00000fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00ff0fff005f0f;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000f0000000f;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00ff0fff005f0f;
+  __m256i_out = __lasx_xvpackod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvpackod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff000607f7;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000010017e7d1;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff000607f7;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000001001807f1;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000007;
+  __m256i_out = __lasx_xvpackod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0002555500000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0002555500000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  unsigned_long_int_out = __lasx_xvpickve2gr_du(__m256i_op0,0x3);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000002a542a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000002a542a;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000005400;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000005400;
+  __m256i_out = __lasx_xvpackod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvpackod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0007fff8000ffff0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000007fff8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0007fff8000ffff0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000007fff8;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0007fff8000ffff0;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0007fff8000ffff0;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffefffef00000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffefffef00000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffefffefffefffef;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00ff0000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00ff0000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00ff00ff00ff00;
+  __m256i_out = __lasx_xvpackod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000f0000000f000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000f0000000f000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000f0000000f000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000f0000000f000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000f0000000f000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000f0000000f000;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvpackod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000c8;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000c8;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000022beb03f;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffa2beb040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000022be22be;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fffa2bea2be;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000022be22be;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fffa2bea2be;
+  __m256i_out = __lasx_xvpackod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff10000fff10000;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfff10000fff10000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff1000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff1000000000000;
+  __m256i_out = __lasx_xvpackod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000200000008;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_op1[2]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffff00ffffff00;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ff0000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ff0000000000;
+  __m256i_out = __lasx_xvpackod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000555500005555;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000555500005555;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000555500005555;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000555500005555;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fffcfffcfffc;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000a0008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000a0008;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ff88ff88;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fff8ffc0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ff00fff8ffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fff8ffc0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ff00fff8ffc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000fff80000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000fff80000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000fff80000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000fff80000;
+  __m256i_out = __lasx_xvpackod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fffffffffffffff;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpackod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00007fff00007fff;
+  __m256i_out = __lasx_xvpackod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
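+  /* The xvpickev.{b,h,w,d} tests below compact the even-indexed
+     elements within each 128-bit lane: the low half of a result lane
+     holds the even elements of the second operand and the high half
+     those of the first operand.  */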
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffff90ffffff81;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffff90ffffff81;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ff90ff81;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000007f;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ff90ff81;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000007f;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffe81;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffe81;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001341c4000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000001000310000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000033e87ef1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000002e2100;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000011c00;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000e8f1;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000103100;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000002e00;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000002a54290;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000004290;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000004290;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000004290;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000004290;
+  __m256i_out = __lasx_xvpickev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0001000100010001;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_op1[1]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xbfbfbfbfbfbfbfbf;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010000;
+  __m256i_out = __lasx_xvpickev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvpickev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xa020202020202020;
+  *((unsigned long*)& __m256i_op1[2]) = 0xa020202020206431;
+  *((unsigned long*)& __m256i_op1[1]) = 0xa020202020202020;
+  *((unsigned long*)& __m256i_op1[0]) = 0xa020202020206431;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020202031;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x2020202020202031;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0004040404000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0004040404000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0004040404000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0004040404000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[1]) = 0x0404000004040000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf800d0d8ffffeecf;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000383fffffdf0d;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf800d0d8ffffeecf;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000383fffffdf0d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[2]) = 0xd0d8eecf383fdf0d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_result[0]) = 0xd0d8eecf383fdf0d;
+  __m256i_out = __lasx_xvpickev_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff000000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xdf80ff20df80ff20;
+  *((unsigned long*)& __m256i_op0[2]) = 0xdfc2ff20df80ffa7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xdf80ff20df80ff20;
+  *((unsigned long*)& __m256i_op0[0]) = 0xdfc2ff20df80ffa7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x80208020c22080a7;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x80208020c22080a7;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvpickev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000040000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000040000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000400;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000400;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffe0047d00e00480;
+  *((unsigned long*)& __m256i_op1[2]) = 0x001fc0200060047a;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffe0047d00e00480;
+  *((unsigned long*)& __m256i_op1[0]) = 0x001fc0200060047a;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xe07de0801f20607a;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xe07de0801f20607a;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000004;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000400000004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvpickev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000800080010000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000800080010000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000800080010000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000800080010000;
+  __m256i_out = __lasx_xvpickev_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_result[3]) = 0x9ffffd8020010001;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffff9fffffff9;
+  *((unsigned long*)& __m256i_result[1]) = 0x9ffffd8020010001;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffff9fffffff9;
+  __m256i_out = __lasx_xvpickev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvpickev_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000070002000a;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000060002000a;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000060002000a;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvpickev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffff80000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickev_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
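+  /* The __lasx_xvpickod_* cases that follow are the odd-element counterpart:
+     within each 128-bit lane the low half of the result takes the odd-indexed
+     elements of the second operand and the high half takes the odd-indexed
+     elements of the first operand.  A rough scalar sketch for the byte
+     variant, illustrative only and not part of the generated test inputs:
+
+       for (i = 0; i < 8; i++)                // per 128-bit lane
+         {
+           res[i]     = op1[2 * i + 1];       // low 8 bytes  <- odd bytes of op1
+           res[i + 8] = op0[2 * i + 1];       // high 8 bytes <- odd bytes of op0
+         }
+  */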
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0003f8040002f607;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0002728b00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffff328dfff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x6651bfff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0003f8040002f607;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffff328dfff;
+  __m256i_out = __lasx_xvpickod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0080200000802000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0080200000802000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00200020ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x1e0000001e000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00200020ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x1e0000001e000000;
+  __m256i_out = __lasx_xvpickod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0080200000802000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0080200000802000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00800080ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00800080ffffffff;
+  __m256i_out = __lasx_xvpickod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvpickod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvpickod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffff8c80;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffffe40;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000040004;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0400040004000400;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0400040004000400;
+  __m256i_out = __lasx_xvpickod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xb70036db12c4007e;
+  *((unsigned long*)& __m256i_op0[2]) = 0xb7146213fc1e0049;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000fefe02fffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xb71c413b199d04b5;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff017e00ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x017e00ff017e01fe;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff017e00ff;
+  *((unsigned long*)& __m256i_result[3]) = 0xb70012c4b714fc1e;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff00ff017e;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fe02b71c199d;
+  *((unsigned long*)& __m256i_result[0]) = 0x017e017e00ff017e;
+  __m256i_out = __lasx_xvpickod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc080ffff0049ffd2;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0049ffd2;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fffeffb9ff9d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x01620133004b0032;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0002ff80ffb70000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffb7ff80ffd0ffd8;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00010000002fff9e;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffb5ff80ffd0ffd8;
+  *((unsigned long*)& __m256i_result[3]) = 0xc080ffff0049ffd2;
+  *((unsigned long*)& __m256i_result[2]) = 0x0002ff80ffb70000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000fffeffb9ff9d;
+  *((unsigned long*)& __m256i_result[0]) = 0x00010000002fff9e;
+  __m256i_out = __lasx_xvpickod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbabababababababa;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbabababababababa;
+  *((unsigned long*)& __m256i_op1[1]) = 0xbabababababababa;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbabababababababa;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xbabababababababa;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xbabababababababa;
+  __m256i_out = __lasx_xvpickod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000003f3f3f3c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xc6c6c6c68787878a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000003f3f3f3c;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8787878a00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003f3fc6c68787;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003f3f87870000;
+  __m256i_out = __lasx_xvpickod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007fff003f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007fff003f;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000007fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fffffff7fffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000007fff;
+  __m256i_out = __lasx_xvpickod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000002467db99;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003e143852;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000002467db99;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003e143852;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000246700003e14;
+  *((unsigned long*)& __m256i_result[2]) = 0x000044447bbbf777;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000246700003e14;
+  *((unsigned long*)& __m256i_result[0]) = 0x000044447bbbf777;
+  __m256i_out = __lasx_xvpickod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0002000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0006000000040000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0002000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0006000000020000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0006000000020000;
+  __m256i_out = __lasx_xvpickod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvpickod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007f00ff007f00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xbff00000bff00000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xbff00000bff00000;
+  __m256i_out = __lasx_xvpickod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x9ff87ef07f7f817f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7f807f007f7f817f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x9ff87f7f7f807f7f;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x9ff87f7f7f807f7f;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffe98;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffe98;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000007f0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000007f00000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000007f00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000007f00000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000007f00000000;
+  __m256i_out = __lasx_xvpickod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ff80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpickod_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x1c1c1c1c1c1c1c1c;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvpickod_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
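+  /* The __lasx_xvilvl_* cases below exercise low-half interleaving: within
+     each 128-bit lane the result alternates elements taken from the low 64
+     bits of the two operands, with the second operand supplying the
+     even-indexed result slots and the first operand the odd-indexed ones.
+     A rough scalar sketch for the byte variant, illustrative only and not
+     part of the generated test inputs:
+
+       for (i = 0; i < 8; i++)            // per 128-bit lane, low 8 bytes of each operand
+         {
+           res[2 * i]     = op1[i];       // even result bytes <- op1 low half
+           res[2 * i + 1] = op0[i];       // odd result bytes  <- op0 low half
+         }
+  */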
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000001a00000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000900000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000001a00000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000900000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000009;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000ffff0000;
+  __m256i_out = __lasx_xvilvl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffefffffefc;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff0000fffe0000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000fefc0000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000fffe0000;
+  __m256i_out = __lasx_xvilvl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffefd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffefdfffffefd;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvilvl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000007f7f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000007f7f7f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000007f7f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000007f007f78;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffbfffc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x007f00007f7f0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7f00fffb7f78fffc;
+  __m256i_out = __lasx_xvilvl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8080808080808081;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8080808080808081;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000808000008080;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000808000008081;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001fffe0001fffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff01fffffffeff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff01fffffffeff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff01fffffffeff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff01fffffffeff;
+  __m256i_out = __lasx_xvilvl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_op1[2]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_op1[1]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_op1[0]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x07efefefefefefee;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x07efefefefefefee;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffffffffffff;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000005;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000005;
+  __m256i_out = __lasx_xvilvl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfff3fff3fff3fff3;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00f300ff00f3;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00f300ff00f3;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00f300ff00f3;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00f300ff00f3;
+  __m256i_out = __lasx_xvilvl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvilvl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00ff00040000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff000c0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00040000;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00ff00fe00ff00fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00fe00fe;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff00fe00fe;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff00ff00fe00fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff00fe00fe;
+  __m256i_out = __lasx_xvilvl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000ff0102;
+  *((unsigned long*)& __m256i_op0[2]) = 0x007c000000810081;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000ff0102;
+  *((unsigned long*)& __m256i_op0[0]) = 0x007c000000810081;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x007c7fff00007fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00817fff00810000;
+  *((unsigned long*)& __m256i_result[1]) = 0x007c7fff00007fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00817fff00810000;
+  __m256i_out = __lasx_xvilvl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000001d001d;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000001d0000001d;
+  __m256i_out = __lasx_xvilvl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe0e0e0e0e0e0e0e0;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe0e0e0e0e0e0e0e0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000e0e0e0e0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe0e0e0e0e0e0e0e0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7000700070007000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7000700070007000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000070007000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7000700070007000;
+  *((unsigned long*)& __m256i_result[3]) = 0xe070e000e070e000;
+  *((unsigned long*)& __m256i_result[2]) = 0xe070e000e070e000;
+  *((unsigned long*)& __m256i_result[1]) = 0xe070e000e070e000;
+  *((unsigned long*)& __m256i_result[0]) = 0xe070e000e070e000;
+  __m256i_out = __lasx_xvilvl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x003f003f003f0040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x003f003f003f0040;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x003f003f003f0040;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x003f003f003f0040;
+  *((unsigned long*)& __m256i_result[3]) = 0x00003f3f00003f3f;
+  *((unsigned long*)& __m256i_result[2]) = 0x00003f3f00004040;
+  *((unsigned long*)& __m256i_result[1]) = 0x00003f3f00003f3f;
+  *((unsigned long*)& __m256i_result[0]) = 0x00003f3f00004040;
+  __m256i_out = __lasx_xvilvl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000100;
+  __m256i_out = __lasx_xvilvl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffe98;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000064;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000064;
+  __m256i_out = __lasx_xvilvl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000e000e000e000e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000e000e000e000e;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000e000e;
+  __m256i_out = __lasx_xvilvl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000e000e;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0003800400038004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000a800b000a800b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0003800400038004;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000a800b000a800b;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000a0080000b00;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000a0080000b00;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000a0080000b00;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000a0080000b00;
+  __m256i_out = __lasx_xvilvl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfe01fe01fd02fd02;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000003fc03fc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfe01fe01fd02fd02;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000003fc03fc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x3f00c0003f00c000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x3f00c0003f00c000;
+  __m256i_out = __lasx_xvilvl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_op1[3]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x498000804843ffe0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_result[2]) = 0x4980008068400000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000001fffffff9;
+  *((unsigned long*)& __m256i_result[0]) = 0x4980008068400000;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xf000f000f000f000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xf000f010f000f010;
+  *((unsigned long*)& __m256i_op1[1]) = 0xf000f000f000f000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xf000f010f000f010;
+  *((unsigned long*)& __m256i_result[3]) = 0x00f0000000f00010;
+  *((unsigned long*)& __m256i_result[2]) = 0xfff0ff00fff0ff10;
+  *((unsigned long*)& __m256i_result[1]) = 0x00f0000000f00010;
+  *((unsigned long*)& __m256i_result[0]) = 0xfff0ff00fff0ff10;
+  __m256i_out = __lasx_xvilvl_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvilvl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvilvl_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffed;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffed;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffed;
+  __m256i_out = __lasx_xvilvl_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
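+  /* Note (added for readability; generated tests below are unchanged):
+     the remaining cases mainly exercise the __lasx_xvilvh_{b,h,w,d}
+     interleave-high intrinsics, which pair the high-half elements of each
+     128-bit lane of the two sources, whereas the __lasx_xvilvl_* cases
+     above pair the low-half elements.  */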
+  *((unsigned long*)& __m256i_op0[3]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbff0000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xbff0800000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xbff0800000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffff90ffffff81;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffff90ffffff81;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000307fffe72e800;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020200008;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0008010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020000020200000;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020000020200000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0008000001010000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101000001010000;
+  __m256i_out = __lasx_xvilvh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5555555580000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x5555555580000000;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x5);
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000001fffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x5555555580000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x5555555580000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x555555553f800000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x555555553f800000;
+  __m256i_out = __lasx_xvilvh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000003f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000003f00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000003f00000000;
+  __m256i_out = __lasx_xvilvh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x247fe49409620040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2475cef801f0ffdd;
+  *((unsigned long*)& __m256i_op0[1]) = 0x6580668200fe0002;
+  *((unsigned long*)& __m256i_op0[0]) = 0x419cd5b11c3c5654;
+  *((unsigned long*)& __m256i_op1[3]) = 0x247fe49409620040;
+  *((unsigned long*)& __m256i_op1[2]) = 0x2475cef801f0ffdd;
+  *((unsigned long*)& __m256i_op1[1]) = 0x6580668200fe0002;
+  *((unsigned long*)& __m256i_op1[0]) = 0x419cd5b11c3c5654;
+  *((unsigned long*)& __m256i_result[3]) = 0x247fe49409620040;
+  *((unsigned long*)& __m256i_result[2]) = 0x247fe49409620040;
+  *((unsigned long*)& __m256i_result[1]) = 0x6580668200fe0002;
+  *((unsigned long*)& __m256i_result[0]) = 0x6580668200fe0002;
+  __m256i_out = __lasx_xvilvh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xf5f5f5f5f5f5f5f5;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xf5f5f5f5f5f5f5f5;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000004000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000004000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xff04ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00ff00ff00ff00;
+  *((unsigned long*)& __m256i_result[0]) = 0xff04ff00ff00ff00;
+  __m256i_out = __lasx_xvilvh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000003f00390035;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8015003f0006001f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000003f00390035;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8015003f0006001f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x80000000001529c1;
+  *((unsigned long*)& __m256i_op1[2]) = 0x80007073cadc3779;
+  *((unsigned long*)& __m256i_op1[1]) = 0x80000000001529c1;
+  *((unsigned long*)& __m256i_op1[0]) = 0x80007073cadc3779;
+  *((unsigned long*)& __m256i_result[3]) = 0x00008000003f0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00390015003529c1;
+  *((unsigned long*)& __m256i_result[1]) = 0x00008000003f0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00390015003529c1;
+  __m256i_out = __lasx_xvilvh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0020002000200020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0020002000200020;
+  __m256i_out = __lasx_xvilvh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvilvh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff0000ffff;
+  __m256i_out = __lasx_xvilvh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000002c;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000002c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000002c;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000002c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000002c0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000002c0000;
+  __m256i_out = __lasx_xvilvh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7eeefefefefefefe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7eeefefefefefefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x7e00ee00fe00fe00;
+  *((unsigned long*)& __m256i_result[2]) = 0xfe00fe00fe00fe00;
+  *((unsigned long*)& __m256i_result[1]) = 0x7e00ee00fe00fe00;
+  *((unsigned long*)& __m256i_result[0]) = 0xfe00fe00fe00fe00;
+  __m256i_out = __lasx_xvilvh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvilvh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ffff;
+  __m256i_out = __lasx_xvilvh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xaad5555500000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xaad5555500000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x1f001f00000007ef;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00001fff200007ef;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1f001f00000007ef;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00001fff200007ef;
+  __m256i_out = __lasx_xvilvh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[2]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[1]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op0[0]) = 0x4040404040404040;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff7bfffff1;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff80007fe9;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff7bfffff1;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff80007fe9;
+  *((unsigned long*)& __m256i_result[3]) = 0x40ff40ff40ff40ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x407b40ff40ff40f1;
+  *((unsigned long*)& __m256i_result[1]) = 0x40ff40ff40ff40ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x407b40ff40ff40f1;
+  __m256i_out = __lasx_xvilvh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff02000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff02000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffff1fffffff1;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001fffe0001fffa;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001fffe00018069;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001fffe0001fffa;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001fffe00018069;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff01fffffffeff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff01fffffffaff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00ff01fffffffeff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff01fffffffaff;
+  __m256i_out = __lasx_xvilvh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvh_b(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00001ff8d8d8c000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00001ff8d8d90000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00001ff8d8d8c000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00001ff8d8d90000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0200000202000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0200000202000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x00001ff800000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xd8d8c00000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00001ff800000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xd8d8c00000000000;
+  __m256i_out = __lasx_xvilvh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x800080ff800080ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x4000c08000000080;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000080c000c080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x4000c08000000080;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000080c000c080;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000400080ffc080;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080ff0080;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000400080ffc080;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080ff0080;
+  __m256i_out = __lasx_xvilvh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000100;
+  __m256i_out = __lasx_xvilvh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000001ff03ff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000203ff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000001ff03ff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000203ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000001ff03ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000001ff03ff;
+  __m256i_out = __lasx_xvilvh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000019ffdf403;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000011ffd97c3;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000019ffdf403;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000011ffd97c3;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000019ffdf403;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000019ffdf403;
+  __m256i_out = __lasx_xvilvh_d(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x001fffffffe00000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7fffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x001f001fffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffe0ffe000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x001f001fffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffe0ffe000000000;
+  __m256i_out = __lasx_xvilvh_h(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvilvh_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
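+  /* The cases below exercise __lasx_xvshuf_b, which (going by the
+     mnemonic) shuffles bytes from the two source vectors under the
+     control of a third byte-index vector.  */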
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000007070707;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0102040000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000020100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0703020000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfe02fe02fee5fe22;
+  *((unsigned long*)& __m256i_op1[0]) = 0xff49fe4200000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffefefffffcfa;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffefefffffefe;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xfffffff8fffffff8;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xfffffff8fc000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfafafafafafafafa;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000fefefe;
+  __m256i_out = __lasx_xvshuf_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7ff0000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x3ff0010000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x3ff0010000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000ffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000ffff0000;
+  __m256i_out = __lasx_xvshuf_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000003ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000000003ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000077fff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffff000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
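+  /* Test cases for __lasx_xvshuf_h: the halfword-granularity shuffle,
+     again driven by an index vector.  */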
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffefe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffefe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000101;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x67eee33567eee435;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x67eee33567eee435;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvshuf_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff80000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff80000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_op1[1]) = 0xefdfefdf00000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xefdfefdfefdfefdf;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7575ffff75757595;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7575ffff7575f575;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7575ffff75757595;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7575ffff7575f575;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000003;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op2[3]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op2[2]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op2[1]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op2[0]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000fffff800;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000fffff800;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000000fffff800;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000fffff800;
+  *((unsigned long*)& __m256i_result[3]) = 0xf800f800f800f800;
+  *((unsigned long*)& __m256i_result[2]) = 0xf800f800f800f800;
+  *((unsigned long*)& __m256i_result[1]) = 0xf800f800f800f800;
+  *((unsigned long*)& __m256i_result[0]) = 0xf800f800f800f800;
+  __m256i_out = __lasx_xvshuf_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000fffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000000ffff88ff88;
+  *((unsigned long*)& __m256i_op2[1]) = 0x8000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000000ffff88ff88;
+  *((unsigned long*)& __m256i_result[3]) = 0xff88ff88ff880000;
+  *((unsigned long*)& __m256i_result[2]) = 0xff88ff88ff880000;
+  *((unsigned long*)& __m256i_result[1]) = 0xff88ff88ff880000;
+  *((unsigned long*)& __m256i_result[0]) = 0xff88ff88ff880000;
+  __m256i_out = __lasx_xvshuf_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
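+  /* Test cases for __lasx_xvshuf_w: the word-granularity variant of the
+     shuffle above.  */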
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000010000ffe1;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000101001e18;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000010000ffe1;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000101001e18;
+  *((unsigned long*)& __m256i_op1[3]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op1[2]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op1[1]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op1[0]) = 0x98111cca98111cca;
+  *((unsigned long*)& __m256i_op2[3]) = 0x000000010000ffe1;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000101001e18;
+  *((unsigned long*)& __m256i_op2[1]) = 0x000000010000ffe1;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000101001e18;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000101001e18;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000101001e18;
+  __m256i_out = __lasx_xvshuf_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x80008000b3e8fef1;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x80008000802ea100;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000002;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00000000012e2110;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000200000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x012e2110012e2110;
+  __m256i_out = __lasx_xvshuf_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000082a54290;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000028aa700;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000082a54290;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000002a54287;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000002a542a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000002a542a;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000100010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000007fc00000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000007fc00000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000007fc00000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000007fc00000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xdfffffffdfffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0xdfffffffdfffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x8000000080000000;
+  __m256i_out = __lasx_xvshuf_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001000104000200;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001000104000200;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_result[3]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_result[2]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_result[1]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_result[0]) = 0x0004000500040005;
+  __m256i_out = __lasx_xvshuf_w(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
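+  /* Test cases for __lasx_xvshuf_d: the doubleword-granularity variant
+     of the shuffle above.  */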
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op1[2]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op1[1]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op1[0]) = 0x555555ab555555ab;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000080008000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000080008000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000007fff7fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007fff7fff;
+  __m256i_out = __lasx_xvshuf_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvshuf_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000fffffe01fe52;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ff01ff02;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000fffffe01fe52;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ff01ff02;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000080008001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000800000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000080008001;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000000000000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000080008001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000080008001;
+  __m256i_out = __lasx_xvshuf_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7fff80007fff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_op2[1]) = 0x8000800080008000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000ff800000ff;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000ff800000ff;
+  __m256i_out = __lasx_xvshuf_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000002000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000002000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000040;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000080040;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000080040;
+  __m256i_out = __lasx_xvshuf_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffeb8649d0d6250;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffeb8649d0d6250;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op2[3]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op2[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op2[1]) = 0xfffeb6839ffffd80;
+  *((unsigned long*)& __m256i_op2[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf_d(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
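+  /* Test cases for __lasx_xvperm_w, the cross-register word permutation
+     controlled by an index vector.  */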
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvperm_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvperm_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvperm_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x017e00ff017e00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff017e01fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x01fe8001b72e0001;
+  *((unsigned long*)& __m256i_op1[2]) = 0xb72e8001b72eaf12;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01fe000247639d9c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xb5308001b72eaf12;
+  *((unsigned long*)& __m256i_result[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00ff00ff017e00ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x017e00ff017e01fe;
+  *((unsigned long*)& __m256i_result[0]) = 0x00ff00ff017e00ff;
+  __m256i_out = __lasx_xvperm_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffff00000000;
+  __m256i_out = __lasx_xvperm_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000055ff01f90ab5;
+  *((unsigned long*)& __m256i_op0[2]) = 0xaa95eafffec6e01f;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000055ff01f90ab5;
+  *((unsigned long*)& __m256i_op0[0]) = 0xaa95eafffec6e01f;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfec6e01ffec6e01f;
+  *((unsigned long*)& __m256i_result[2]) = 0xfec6e01ffec6e01f;
+  *((unsigned long*)& __m256i_result[1]) = 0xfec6e01ffec6e01f;
+  *((unsigned long*)& __m256i_result[0]) = 0xfec6e01ffec6e01f;
+  __m256i_out = __lasx_xvperm_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvperm_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fefffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvperm_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvperm_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvperm_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvperm_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffdfff80;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000b7;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffdfff80;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000016e00;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000016e00;
+  *((unsigned long*)& __m256i_result[3]) = 0xffdfff80ffdfff80;
+  *((unsigned long*)& __m256i_result[2]) = 0xffdfff80ffdfff80;
+  *((unsigned long*)& __m256i_result[1]) = 0xffdfff80ffdfff80;
+  *((unsigned long*)& __m256i_result[0]) = 0xffdfff80ffdfff80;
+  __m256i_out = __lasx_xvperm_w(__m256i_op0,__m256i_op1);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
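+  /* Test cases for the __lasx_xvshuf4i_{b,h,w} immediate-controlled
+     shuffles, which rearrange elements within each group of four
+     according to the 8-bit immediate operand.  */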
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00007ffffffff7ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x49d8080067f4f81f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00007f00fffff7ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xd8490849f467f867;
+  __m256i_out = __lasx_xvshuf4i_b(__m256i_op0,0xb7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00007ffffffff7ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x49d8080067f4f81f;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7ffff7ff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x080008000800f81f;
+  __m256i_out = __lasx_xvshuf4i_h(__m256i_op0,0xa8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x1e1800001e180000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1e18000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x1e1800001e180000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1e18000000000000;
+  __m256i_out = __lasx_xvshuf4i_w(__m256i_op0,0xfe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf4i_w(__m256i_op0,0x64);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_result[3]) = 0xc5c4c5c5c5c5c5c5;
+  *((unsigned long*)& __m256i_result[2]) = 0xc5c545c545c545c5;
+  *((unsigned long*)& __m256i_result[1]) = 0xc5c4c5c5c5c5c5c5;
+  *((unsigned long*)& __m256i_result[0]) = 0xc5c545c545c545c5;
+  __m256i_out = __lasx_xvshuf4i_h(__m256i_op0,0x3d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_op0[2]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_op0[0]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_result[3]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_result[2]) = 0x45c5c5c545c5c5c5;
+  *((unsigned long*)& __m256i_result[1]) = 0xc5c5c5c4c5c5c5c4;
+  *((unsigned long*)& __m256i_result[0]) = 0x45c5c5c545c5c5c5;
+  __m256i_out = __lasx_xvshuf4i_w(__m256i_op0,0xb0);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf4i_b(__m256i_op0,0xdb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000f9f900;
+  *((unsigned long*)& __m256i_op0[2]) = 0x79f9f9f900f9f9e0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000f9f900;
+  *((unsigned long*)& __m256i_op0[0]) = 0x79f9f9f900f9f900;
+  *((unsigned long*)& __m256i_result[3]) = 0x00f9f90079f9f9f9;
+  *((unsigned long*)& __m256i_result[2]) = 0x79f9f9f900000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00f9f90079f9f9f9;
+  *((unsigned long*)& __m256i_result[0]) = 0x79f9f9f900000000;
+  __m256i_out = __lasx_xvshuf4i_w(__m256i_op0,0x97);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvshuf4i_h(__m256i_op0,0xf7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0101010183f95466;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x01010101d58efe94;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0101010183f95466;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x01010101d58efe94;
+  __m256i_out = __lasx_xvshuf4i_d(__m256i_op0,__m256i_op1,0xa7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf4i_h(__m256i_op0,0x3a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf4i_b(__m256i_op0,0x95);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000007aff7c00;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffd017d00;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000007aff7c00;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffd017d00;
+  *((unsigned long*)& __m256i_result[3]) = 0x7aff7c0000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xfd017d0000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7aff7c0000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xfd017d0000000000;
+  __m256i_out = __lasx_xvshuf4i_w(__m256i_op0,0xb3);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xc3f0c3f0c3f0c3f0;
+  *((unsigned long*)& __m256i_op0[2]) = 0xc3f0c3f0c3f0c3f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xc3f0c3f0c3f0c3f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xc3f0c3f0c3f0c3f0;
+  *((unsigned long*)& __m256i_result[3]) = 0xc3f0c3f0c3f0c3f0;
+  *((unsigned long*)& __m256i_result[2]) = 0xc3f0c3f0c3f0c3f0;
+  *((unsigned long*)& __m256i_result[1]) = 0xc3f0c3f0c3f0c3f0;
+  *((unsigned long*)& __m256i_result[0]) = 0xc3f0c3f0c3f0c3f0;
+  __m256i_out = __lasx_xvshuf4i_w(__m256i_op0,0x3c);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf4i_d(__m256i_op0,__m256i_op1,0xd9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvshuf4i_w(__m256i_op0,0xf4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffff0000;
+  __m256i_out = __lasx_xvshuf4i_h(__m256i_op0,0xa7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffb3b4;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffff5ffff4738;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffb3b4;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffff5ffff4738;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvshuf4i_b(__m256i_op0,0xee);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00001fff00001fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00001fff00001fff;
+  __m256i_out = __lasx_xvshuf4i_d(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffff80be0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000f0f0002;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffff80be0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000f1002;
+  *((unsigned long*)& __m256i_op1[3]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_result[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x80000000ff800000;
+  __m256i_out = __lasx_xvshuf4i_d(__m256i_op0,__m256i_op1,0xdb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_result[3]) = 0xff81ff7dffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffff81ff7d;
+  *((unsigned long*)& __m256i_result[1]) = 0xff81ff7dffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffff81ff7d;
+  __m256i_out = __lasx_xvshuf4i_w(__m256i_op0,0x28);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op1[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000080000009;
+  *((unsigned long*)& __m256i_op1[0]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x43ef878780000009;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x43ef878780000009;
+  __m256i_out = __lasx_xvshuf4i_d(__m256i_op0,__m256i_op1,0x36);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvshuf4i_d(__m256i_op0,__m256i_op1,0x5a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf4i_b(__m256i_op0,0x2f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf4i_b(__m256i_op0,0x6f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf4i_d(__m256i_op0,__m256i_op1,0x5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf4i_b(__m256i_op0,0x23);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvshuf4i_d(__m256i_op0,__m256i_op1,0xd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000002000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000020ff790020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000002000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000020ff790020;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000002000000020;
+  __m256i_out = __lasx_xvshuf4i_w(__m256i_op0,0xa5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[3]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[2]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[1]) = 0xff1cff1cff1cff1c;
+  *((unsigned long*)& __m256i_result[0]) = 0xff1cff1cff1cff1c;
+  __m256i_out = __lasx_xvshuf4i_h(__m256i_op0,0xdc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffff0020;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff8001ffff0001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffff0020;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff8001ffff0001;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff8001ffff8001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff8001ffff8001;
+  __m256i_out = __lasx_xvshuf4i_h(__m256i_op0,0x6e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvshuf4i_h(__m256i_op0,0x9f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff00017fff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op1[0]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_result[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_result[0]) = 0x04f104f104f504ed;
+  __m256i_out = __lasx_xvshuf4i_d(__m256i_op0,__m256i_op1,0x7e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op0[2]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op0[0]) = 0x04f104f104f504ed;
+  *((unsigned long*)& __m256i_result[3]) = 0x0002ffff00020002;
+  *((unsigned long*)& __m256i_result[2]) = 0x04f504f104f504f5;
+  *((unsigned long*)& __m256i_result[1]) = 0x0002ffff00020002;
+  *((unsigned long*)& __m256i_result[0]) = 0x04f504f104f504f5;
+  __m256i_out = __lasx_xvshuf4i_h(__m256i_op0,0x65);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m256i_op0[2]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m256i_op0[0]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m256i_result[3]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m256i_result[2]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m256i_result[1]) = 0xe9e9e9e9e9e9e9e9;
+  *((unsigned long*)& __m256i_result[0]) = 0xe9e9e9e9e9e9e9e9;
+  __m256i_out = __lasx_xvpermi_d(__m256i_op0,0xf7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000100000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000100000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000100000001;
+  __m256i_out = __lasx_xvpermi_d(__m256i_op0,0x55);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpermi_d(__m256i_op0,0x78);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpermi_d(__m256i_op0,0x4a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpermi_d(__m256i_op0,0x2d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000ffffffff;
+  __m256i_out = __lasx_xvpermi_d(__m256i_op0,0xa9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvpermi_d(__m256i_op0,0xc7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0x05ea05ea05ea05ec;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0x05ea05ea05ea05ec;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x05ea05ea05ea05ec;
+  *((unsigned long*)& __m256i_result[1]) = 0x05ea05ea05ea05ec;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000001;
+  __m256i_out = __lasx_xvpermi_d(__m256i_op0,0x49);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffff0000fffd0004;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffff0000fffd0004;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff0000fffd0004;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0002fffd;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff0000fffd0004;
+  __m256i_out = __lasx_xvpermi_d(__m256i_op0,0xcb);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op1[1]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7575757575757575;
+  *((unsigned long*)& __m256i_result[3]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_result[2]) = 0x7fff7fff7fff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0x7fe37fe3001d001d;
+  *((unsigned long*)& __m256i_result[0]) = 0x7fff7fff7fff0000;
+  __m256i_out = __lasx_xvpermi_q(__m256i_op0,__m256i_op1,0x22);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00ff00ff00ff00ff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffff0000ffff0000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpermi_q(__m256i_op0,__m256i_op1,0xca);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000019001c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000019001c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000001fe;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000000001fe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvpermi_q(__m256i_op0,__m256i_op1,0xb9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0cc08723ff900001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xcc9b89f2f6cef440;
+  int_out = __lasx_xvpickve2gr_w(__m256i_op0,0x7);
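+  /* The extracted scalar is stored in int_out but not asserted against an
+     expected value; only the xvpickve2gr_w call itself is exercised here.  */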
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000020202;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000002020202;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000020200;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_w(__m256i_op0,__m256i_op1,0x25);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfe02fe02fee5fe22;
+  *((unsigned long*)& __m256i_op0[0]) = 0xff49fe4200000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xbf28b0686066be60;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xff49fe4200000000;
+  __m256i_out = __lasx_xvextrins_d(__m256i_op0,__m256i_op1,0xbf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffff5f5c;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x00000000000000fe;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x00000000000000fe;
+  __m256i_out = __lasx_xvextrins_b(__m256i_op0,__m256i_op1,0xfe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_d(__m256i_op0,__m256i_op1,0x9f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_d(__m256i_op0,__m256i_op1,0xc4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvextrins_d(__m256i_op0,__m256i_op1,0x99);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000fffffefc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000fffffffe0;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffefffffefc;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000fffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000fffffffff;
+  __m256i_out = __lasx_xvextrins_h(__m256i_op0,__m256i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_w(__m256i_op0,__m256i_op1,0x8f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xe161616161616161;
+  *((unsigned long*)& __m256i_op1[2]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_op1[1]) = 0xe161616161616161;
+  *((unsigned long*)& __m256i_op1[0]) = 0xe161616161614e60;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000061;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000061;
+  __m256i_out = __lasx_xvextrins_b(__m256i_op0,__m256i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_h(__m256i_op0,__m256i_op1,0x83);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op1[2]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000044444443;
+  *((unsigned long*)& __m256i_op1[0]) = 0x7bbbbbbbf7777778;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000007bbbbbbb;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000007bbbbbbb;
+  __m256i_out = __lasx_xvextrins_w(__m256i_op0,__m256i_op1,0x8d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_w(__m256i_op0,__m256i_op1,0x66);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_h(__m256i_op0,__m256i_op1,0xda);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00007f7f00007f7f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffff900000800;
+  *((unsigned long*)& __m256i_result[3]) = 0x00007f7f00007f00;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00007f7f00007fff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_b(__m256i_op0,__m256i_op1,0x87);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000000;
+  __m256i_out = __lasx_xvextrins_h(__m256i_op0,__m256i_op1,0xa5);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2b2a292827262524;
+  *((unsigned long*)& __m256i_op0[2]) = 0x232221201f1e1d1c;
+  *((unsigned long*)& __m256i_op0[1]) = 0x2b2a292827262524;
+  *((unsigned long*)& __m256i_op0[0]) = 0x232221201f1e1d1c;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000023;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000023;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000027262524;
+  *((unsigned long*)& __m256i_result[2]) = 0x232221201f1e1d1c;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000027262524;
+  *((unsigned long*)& __m256i_result[0]) = 0x232221201f1e1d1c;
+  __m256i_out = __lasx_xvextrins_w(__m256i_op0,__m256i_op1,0xbd);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000080008001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000080008001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000080000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000080000000;
+  __m256i_out = __lasx_xvextrins_b(__m256i_op0,__m256i_op1,0x33);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_d(__m256i_op0,__m256i_op1,0xb8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffc6ffc6003a003a;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffc6ffc6003a003a;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffc6ffc6003a003a;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffc6ffc6003a003a;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffff0000;
+  __m256i_out = __lasx_xvextrins_h(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvextrins_w(__m256i_op0,__m256i_op1,0x54);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_b(__m256i_op0,__m256i_op1,0xe7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000000430207f944;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_w(__m256i_op0,__m256i_op1,0x7e);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_result[2]) = 0xff00010001000100;
+  *((unsigned long*)& __m256i_result[1]) = 0x0100010001000100;
+  *((unsigned long*)& __m256i_result[0]) = 0xff00010001000100;
+  __m256i_out = __lasx_xvextrins_b(__m256i_op0,__m256i_op1,0x7b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvextrins_h(__m256i_op0,__m256i_op1,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00003f3f00003f3f;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00003f3f00003f3f;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffff0000000f;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffff0000000f;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff0000000d;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_d(__m256i_op0,__m256i_op1,0x56);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_w(__m256i_op0,__m256i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ff0100ff0000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000ff01;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000ff0000;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000ff01;
+  __m256i_out = __lasx_xvextrins_h(__m256i_op0,__m256i_op1,0x6f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000001010100;
+  *((unsigned long*)& __m256i_op1[2]) = 0x8000000000000405;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000001010100;
+  *((unsigned long*)& __m256i_op1[0]) = 0x8000000000000405;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000600000006;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000600000006;
+  __m256i_out = __lasx_xvextrins_h(__m256i_op0,__m256i_op1,0xf6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000007f8000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000007f8000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvextrins_h(__m256i_op0,__m256i_op1,0x7b);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000fff8ffc0;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ff00fff8ffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000fff8ffc0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ff00fff8ffc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000fff8ffc0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000ff00fff8ffc0;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000fff8ffc0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000ff00fff8ffc0;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000fff8fff8;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ff00fff8ffc0;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000fff8fff8;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ff00fff8ffc0;
+  __m256i_out = __lasx_xvextrins_b(__m256i_op0,__m256i_op1,0x82);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000002000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000002000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000002000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000002000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000002000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000002000000;
+  __m256i_out = __lasx_xvextrins_h(__m256i_op0,__m256i_op1,0x43);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0001ffff0001ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvextrins_d(__m256i_op0,__m256i_op1,0x7);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffeb664007ffd61;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffe97a1df5b41b0;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffeb664007ffd61;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffe97a1df5b41b0;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffff007ffd61;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffe97c020010001;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffff007ffd61;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffe97c020010001;
+  __m256i_out = __lasx_xvextrins_w(__m256i_op0,__m256i_op1,0x62);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-str-manipulate.c b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-str-manipulate.c
new file mode 100644
index 00000000000..c6148bd4990
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-str-manipulate.c
@@ -0,0 +1,712 @@
+/* { dg-do run } */
+/* { dg-options "-mlasx -w" } */
+/* { dg-timeout 500 } */
+#include "../simd_correctness_check.h"
+#include <lasxintrin.h>
+
+int main ()
+{
+  __m256i __m256i_op0, __m256i_op1, __m256i_op2, __m256i_out, __m256i_result;
+  __m256 __m256_op0, __m256_op1, __m256_op2, __m256_out, __m256_result;
+  __m256d __m256d_op0, __m256d_op1, __m256d_op2, __m256d_out, __m256d_result;
+
+  int int_op0, int_op1, int_op2, int_out, int_result, i=1, fail;
+  long int long_op0, long_op1, long_op2, long_out, long_result;
+  long int long_int_out, long_int_result;
+  unsigned int unsigned_int_out, unsigned_int_result;
+  unsigned long int unsigned_long_int_out, unsigned_long_int_result;
+
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000000e7;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ff00ff00000007;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000007;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000080000;
+  __m256i_out = __lasx_xvfrstp_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[2]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00007f7f00000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00007f7f00007fff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000007f00340040;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000007f000000ff;
+  *((unsigned long*)& __m256i_result[3]) = 0x2020202020202020;
+  *((unsigned long*)& __m256i_result[2]) = 0x2020202020200008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0008010101010101;
+  *((unsigned long*)& __m256i_result[0]) = 0x0101010101010101;
+  __m256i_out = __lasx_xvfrstp_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000ffff00000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000ffff0000ffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000ffff00000008;
+  __m256i_out = __lasx_xvfrstp_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x03f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[2]) = 0x03f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[1]) = 0x03f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op1[0]) = 0x03f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op2[3]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op2[2]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op2[1]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_op2[0]) = 0xf7f7f7f7f7f7f7f7;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfrstp_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000010;
+  __m256i_out = __lasx_xvfrstp_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_result[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffff10;
+  __m256i_out = __lasx_xvfrstp_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvfrstp_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0101010101010101;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_result[3]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xbfbfbfbfbfbfbfbf;
+  *((unsigned long*)& __m256i_result[1]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xbfbfbfbfbfbfbfbf;
+  __m256i_out = __lasx_xvfrstp_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvfrstp_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000010;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000010;
+  __m256i_out = __lasx_xvfrstp_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfefefefefefefefe;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000000000f0;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfrstp_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000080;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000010;
+  __m256i_out = __lasx_xvfrstp_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffe1;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffff10;
+  __m256i_out = __lasx_xvfrstp_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x007f007bfffffffb;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x007f007bfffffffb;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000010000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000010000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfrstp_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffdbbbcf;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffb8579f;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffdbbbcf;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffb8579f;
+  *((unsigned long*)& __m256i_op2[3]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_op2[2]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_op2[1]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_op2[0]) = 0xfffcfffcfffcfffc;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000001;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfrstp_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvfrstp_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000c040c0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000c040c0;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00000004843ffdff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00000004843ffdff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffe000ffffffff08;
+  *((unsigned long*)& __m256i_result[1]) = 0xffe000ffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffe000ffffffff08;
+  __m256i_out = __lasx_xvfrstp_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op2[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op2[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffff0000;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffff0000;
+  __m256i_out = __lasx_xvfrstp_h(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0004000400040004;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0004000500040005;
+  *((unsigned long*)& __m256i_op2[3]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256i_op2[2]) = 0x00007fff00000000;
+  *((unsigned long*)& __m256i_op2[1]) = 0x00007fff00007fff;
+  *((unsigned long*)& __m256i_op2[0]) = 0x00007fff00000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffff10;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffff10;
+  __m256i_out = __lasx_xvfrstp_b(__m256i_op0,__m256i_op1,__m256i_op2);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x38a966b31be83ee9;
+  *((unsigned long*)& __m256i_op0[2]) = 0x5f6108dc25b8e028;
+  *((unsigned long*)& __m256i_op0[1]) = 0xf41a56e8a20878d7;
+  *((unsigned long*)& __m256i_op0[0]) = 0x683b8b67e20c8ee5;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffcd42ffffecc0;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000475ffff4c51;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000740dffffad17;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00003f4bffff7130;
+  *((unsigned long*)& __m256i_result[3]) = 0x38a966b31be83ee9;
+  *((unsigned long*)& __m256i_result[2]) = 0x5f6108dc25b80001;
+  *((unsigned long*)& __m256i_result[1]) = 0xf41a56e8a20878d7;
+  *((unsigned long*)& __m256i_result[0]) = 0x683b8b67e20c0001;
+  __m256i_out = __lasx_xvfrstpi_h(__m256i_op0,__m256i_op1,0x10);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x1000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x1000000000000000;
+  __m256i_out = __lasx_xvfrstpi_b(__m256i_op0,__m256i_op1,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000000004fb;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffff0008;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffff0008;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvfrstpi_h(__m256i_op0,__m256i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0xffff0008ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffff0008ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvfrstpi_h(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00ffffff1e9e9e9e;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffff9e9eb09e;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00ffffff1e9e9e9e;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffff9e9eb09e;
+  *((unsigned long*)& __m256i_result[3]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_result[2]) = 0xffc00000ffc0ffc0;
+  *((unsigned long*)& __m256i_result[1]) = 0xffc0ffc0ffc0ffc0;
+  *((unsigned long*)& __m256i_result[0]) = 0xffc00000ffc0ffc0;
+  __m256i_out = __lasx_xvfrstpi_h(__m256i_op0,__m256i_op1,0xa);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[2]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[1]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_op1[0]) = 0xfffffffffffffffe;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfrstpi_h(__m256i_op0,__m256i_op1,0x19);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfrstpi_b(__m256i_op0,__m256i_op1,0xf);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000226200005111;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000165e0000480d;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000226200005111;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000165e0000480d;
+  *((unsigned long*)& __m256i_op1[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffd8ffc7ffdaff8a;
+  *((unsigned long*)& __m256i_op1[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffd8ffc7ffdaff8a;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000226200005111;
+  *((unsigned long*)& __m256i_result[2]) = 0x000016000000480d;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000226200005111;
+  *((unsigned long*)& __m256i_result[0]) = 0x000016000000480d;
+  __m256i_out = __lasx_xvfrstpi_b(__m256i_op0,__m256i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xe800c0d8fffeeece;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff383efffedf0c;
+  *((unsigned long*)& __m256i_op0[1]) = 0xe800c0d8fffeeece;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff383efffedf0c;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xe800c000fffeeece;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff383efffedf0c;
+  *((unsigned long*)& __m256i_result[1]) = 0xe800c000fffeeece;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff383efffedf0c;
+  __m256i_out = __lasx_xvfrstpi_b(__m256i_op0,__m256i_op1,0xc);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000008;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000008;
+  __m256i_out = __lasx_xvfrstpi_h(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffff000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x8000000080000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x80000000ff800000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffff000200000000;
+  *((unsigned long*)& __m256i_result[1]) = 0xff00000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffff000200000000;
+  __m256i_out = __lasx_xvfrstpi_b(__m256i_op0,__m256i_op1,0x4);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffff00ffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffff00ffffffff;
+  __m256i_out = __lasx_xvfrstpi_b(__m256i_op0,__m256i_op1,0x14);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0x7c007c0080008000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0x7c007c0080008000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[2]) = 0x7c00000880008000;
+  *((unsigned long*)& __m256i_result[1]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[0]) = 0x7c00000880008000;
+  __m256i_out = __lasx_xvfrstpi_h(__m256i_op0,__m256i_op1,0x1a);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x457db03e457db03e;
+  *((unsigned long*)& __m256i_op0[2]) = 0x457db03e45a87310;
+  *((unsigned long*)& __m256i_op0[1]) = 0x457db03e457db03e;
+  *((unsigned long*)& __m256i_op0[0]) = 0x457db03e45a87310;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x000f000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x000f000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0008b03e457db03e;
+  *((unsigned long*)& __m256i_result[2]) = 0x457db03e45a87310;
+  *((unsigned long*)& __m256i_result[1]) = 0x0008b03e457db03e;
+  *((unsigned long*)& __m256i_result[0]) = 0x457db03e45a87310;
+  __m256i_out = __lasx_xvfrstpi_h(__m256i_op0,__m256i_op1,0x1f);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op0[2]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op0[1]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op0[0]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000002000000020;
+  *((unsigned long*)& __m256i_result[3]) = 0x000000000008000b;
+  *((unsigned long*)& __m256i_result[2]) = 0x000000000000000b;
+  *((unsigned long*)& __m256i_result[1]) = 0x000000000008000b;
+  *((unsigned long*)& __m256i_result[0]) = 0x000000000000000b;
+  __m256i_out = __lasx_xvfrstpi_h(__m256i_op0,__m256i_op1,0x1d);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000dfffffff1;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000cfffffff3;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000dfffffff1;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000cfffffff3;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfrstpi_b(__m256i_op0,__m256i_op1,0x16);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op0[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m256i_result[2]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_result[1]) = 0xffffffffffffff00;
+  *((unsigned long*)& __m256i_result[0]) = 0xffffffffffffffff;
+  __m256i_out = __lasx_xvfrstpi_b(__m256i_op0,__m256i_op1,0x8);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000001000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000010001;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000001000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000010001;
+  __m256i_out = __lasx_xvfrstpi_b(__m256i_op0,__m256i_op1,0x9);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0008000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0008000000000000;
+  __m256i_out = __lasx_xvfrstpi_h(__m256i_op0,__m256i_op1,0x13);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[1]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op0[0]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_op1[3]) = 0x000000007fff0000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x000000007fff0000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000008000;
+  *((unsigned long*)& __m256i_result[3]) = 0xff01fffe00000001;
+  *((unsigned long*)& __m256i_result[2]) = 0xfffffffe00000001;
+  *((unsigned long*)& __m256i_result[1]) = 0xff01fffe00000001;
+  *((unsigned long*)& __m256i_result[0]) = 0xfffffffe00000001;
+  __m256i_out = __lasx_xvfrstpi_b(__m256i_op0,__m256i_op1,0xe);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[3]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[2]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_op1[1]) = 0xffffffffffffffff;
+  *((unsigned long*)& __m256i_op1[0]) = 0x00000000ffffffff;
+  *((unsigned long*)& __m256i_result[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[0]) = 0x0000000000000000;
+  __m256i_out = __lasx_xvfrstpi_b(__m256i_op0,__m256i_op1,0x6);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  *((unsigned long*)& __m256i_op0[3]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_op0[2]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_op0[1]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_op0[0]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_op1[3]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[2]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[1]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_op1[0]) = 0x0000000000000000;
+  *((unsigned long*)& __m256i_result[3]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_result[2]) = 0x10ffffff10000006;
+  *((unsigned long*)& __m256i_result[1]) = 0x0fffffff10000006;
+  *((unsigned long*)& __m256i_result[0]) = 0x10ffffff10000006;
+  __m256i_out = __lasx_xvfrstpi_b(__m256i_op0,__m256i_op1,0x17);
+  ASSERTEQ_64(__LINE__, __m256i_result, __m256i_out);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-xvldrepl.c b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-xvldrepl.c
new file mode 100644
index 00000000000..4a2ea243d3d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-xvldrepl.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mlasx" } */
+/* { dg-final { scan-assembler-times "xvldrepl.w" 2} } */
+
+#define N 258
+
+float a[N], b[N], c[N];
+
+void test() {
+  for(int i = 0; i < 256; i++) {
+      a[i] = c[0] * b[i] + c[1];
+  }
+}
diff --git a/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-xvstelm.c b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-xvstelm.c
new file mode 100644
index 00000000000..fad69fad4f8
--- /dev/null
+++ b/gcc/testsuite/gcc.target/loongarch/vector/lasx/lasx-xvstelm.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mlasx" } */
+/* { dg-final { scan-assembler-times "xvstelm.w" 8} } */
+
+#define LEN 256
+
+float a[LEN], b[LEN], c[LEN];
+
+void test(){
+    for (int i = 0; i < LEN; i += 2)
+      a[i] = b[i] + c[i];
+}
-- 
2.36.0



* Re: [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target.
  2023-07-18 11:06 [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target Chenghui Pan
                   ` (7 preceding siblings ...)
  2023-07-18 11:06 ` [PATCH v2 8/8] LoongArch: Add Loongson ASX " Chenghui Pan
@ 2023-07-18 12:26 ` Xi Ruoyao
  2023-07-19  1:14   ` PanChenghui
  8 siblings, 1 reply; 11+ messages in thread
From: Xi Ruoyao @ 2023-07-18 12:26 UTC (permalink / raw)
  To: Chenghui Pan, gcc-patches; +Cc: i, chenglulu, xuchenghua

On Tue, 2023-07-18 at 19:06 +0800, Chenghui Pan wrote:
> Lulu Cheng (8):
>   LoongArch: Added Loongson SX vector directive compilation framework.
>   LoongArch: Added Loongson SX base instruction support.
>   LoongArch: Added Loongson SX directive builtin function support.
>   LoongArch: Added Loongson ASX vector directive compilation framework.
>   LoongArch: Added Loongson ASX base instruction support.
>   LoongArch: Added Loongson ASX directive builtin function support.

Let's always use "Add".

>   LoongArch: Add Loongson SX directive test cases.
>   LoongArch: Add Loongson ASX directive test cases.

Have you tested this series by bootstrapping and regtesting GCC with
BOOT_CFLAGS="-O2 -ftree-vectorize -fno-vect-cost-model -mlasx" and
BOOT_CFLAGS="-O3 -mlasx"?  This may catch some mistakes early.

And I'll rebuild the entire system with these GCC patches and -mlasx in
Aug (after Glibc-2.38 release) as a field test too.

-- 
Xi Ruoyao <xry111@xry111.site>
School of Aerospace Science and Technology, Xidian University


* Re: [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target.
  2023-07-18 12:26 ` [PATCH v2 0/8] Add Loongson SX/ASX instruction support to LoongArch target Xi Ruoyao
@ 2023-07-19  1:14   ` PanChenghui
  0 siblings, 0 replies; 11+ messages in thread
From: PanChenghui @ 2023-07-19  1:14 UTC (permalink / raw)
  To: Xi Ruoyao, gcc-patches; +Cc: i, chenglulu, xuchenghua

Got it, I will fix the commit info in the next version.

I haven't tested GCC with these flags before, so I will try to build and
run the regression tests with BOOT_CFLAGS later.

On Tue, 2023-07-18 at 20:26 +0800, Xi Ruoyao wrote:
> On Tue, 2023-07-18 at 19:06 +0800, Chenghui Pan wrote:
> > Lulu Cheng (8):
> >   LoongArch: Added Loongson SX vector directive compilation
> > framework.
> >   LoongArch: Added Loongson SX base instruction support.
> >   LoongArch: Added Loongson SX directive builtin function support.
> >   LoongArch: Added Loongson ASX vector directive compilation
> > framework.
> >   LoongArch: Added Loongson ASX base instruction support.
> >   LoongArch: Added Loongson ASX directive builtin function support.
> 
> Let's always use "Add".
> 
> >   LoongArch: Add Loongson SX directive test cases.
> >   LoongArch: Add Loongson ASX directive test cases.
> 
> Have you tested this series by bootstrapping and regtesting GCC with
> BOOT_CFLAGS="-O2 -ftree-vectorize -fno-vect-cost-model -mlasx" and
> BOOT_CFLAGS="-O3 -mlasx"?  This may catch some mistakes early.
> 
> And I'll rebuild the entire system with these GCC patches and -mlasx
> in
> Aug (after Glibc-2.38 release) as a field test too.
> 


